.data.xml file.
Finally you have to adjust the database connection string which differs inside and outside P1. For that I think it should be okay if you simply replace all occurrences of ATONR_COOL with ATLAS_COOLPROD.
Hope that will work …
Cheers, Martin
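A minimal sketch of that replacement step (assuming a copied .data.xml file; the file name below is just a placeholder):

# sketch: swap the DB connection string in a copied OKS .data.xml file,
# as suggested in Martin's mail above (the file name is hypothetical)
from pathlib import Path

path = Path("L1CaloStandalone.data.xml")   # placeholder, use the actual copy
text = path.read_text()
path.write_text(text.replace("ATONR_COOL", "ATLAS_COOLPROD"))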
Calibration processing code
https://gitlab.cern.ch/atlas-l1calo/CalibrationProcessing
CPM Input scan
- Taken from the calibration panel (takes a few minutes). Errors about the CMX do not matter (because the CPM changes the clock). Results can be found on the crate SBC (e.g. ccc02 in /logs/tdaq-06-01-01/L1CaloCalibration/results)
CP Run2
There is a scale factor of 2 for the EM energies only between Run 1 and Run 2, since in Run 2 we increased the energy resolution from 1 GeV to 500 MeV in the L1Calo CP system.
JEM Input scan
- Again taken from the calibration panel. Same as before regarding CMX errors. Results are linked from the L1Calo twiki, Calib --> Status --> where the log file and the pdf with the results can be found. Red bands are forbidden regions, blue bands are missing channels.
Calibration
Tile CIS
Sanya's instructions:
Yes, you are running with UPD4 constants
To see evolution of laser constants for one particular channel and
intervals for different IOVs you can simply do
asetup 21.0.71,Athena
ReadCalibFromCool.py --folder=/TILE/OFL02/CALIB/LAS/LIN --module=LBA10
--chan=20 --gain=0 --pmt --begin=343000
TileCalibTools : INFO Resolved globalTag 'CONDBR2-BLKPA-2018-06' to
folderTag 'TileOfl02CalibLasLin-RUN2-UPD4-16'
(341222,0) LBA10 pm 21 ch 20 LG 0.944506 646.900024
(345808,0) LBA10 pm 21 ch 20 LG 1.000000 -1.000000
(347606,0) LBA10 pm 21 ch 20 LG 1.017112 646.900024
(348534,0) LBA10 pm 21 ch 20 LG 1.013614 646.900024
(349137,0) LBA10 pm 21 ch 20 LG 0.989601 646.900024
(349712,0) LBA10 pm 21 ch 20 LG 0.968799 646.900024
(350144,0) LBA10 pm 21 ch 20 LG 0.957218 646.900024
to be sure that these constants are the same as in your reconstruction
job, you can add --tag parameter
ReadCalibFromCool.py --folder=/TILE/OFL02/CALIB/LAS/LIN
--tag=CONDBR2-BLKPA-2018-03 --module=LBA10 --chan=20 --gain=0 --pmt
--begin=343000
and you'll see that they are indeed the same, because:
TileCalibTools : INFO Resolved globalTag 'CONDBR2-BLKPA-2018-03' to
folderTag 'TileOfl02CalibLasLin-RUN2-UPD4-16'
other constants do not change much, but in case you want to check them -
you can use
--folder=/TILE/OFL02/CALIB/CIS/LIN
--folder=/TILE/OFL02/CALIB/CES
BTW, in ReadCalibFromCool.py you can use short tag parameter:
--tag=UPD1 or --tag=UPD4
which will show you constants for latest UPD1 and UPD4 (UPD4 is assumed
by default)
Check masked Tile channels
To check history of masking simply do something like
ReadBchFromCool.py --tag=UPD4 --module=LBC56 --chan=10 --pmt --begin=300000
or, if you are not sure about channel number, do first
ReadBchFromCool.py --module=LBC56 --pmt --begin=300000 | grep -v good
the channel number to PMT number map is available here: http://zenis.dnp.fmph.uniba.sk/tile.html
Receivers
HV correction
https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloOnCallManual#Updating_Receiver_gains_and_LAr
https://indico.cern.ch/event/638444/contributions/2639266/attachments/1490522/2316697/bracinik10jul17_calibStatus.pdf
https://cds.cern.ch/record/830849/files/ATL-COM-LARG-2005-003.pdf?version=1
Change receivers gains
- With the new sqlite file from Juraj (instructions here: /det/l1calo/doc). Juraj provides the files (update: automatic from analysis; force: values put by hand for problematic channels), to be copied into
/det/l1calo/coolData/gains
cd /det/l1calo/coolData/gains
../../scripts/updateGains.sh updateLAr_mar16_v1.sqlite
- To revert or change a single channel, go to P1, setup l1calo, open ACE, open the DB, search for the receiver folder and look for the channel (by receiver channel name, not TT).
Monitor receivers inputs
https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloReceiversExpert#How_to_configure_the_Monitoring
Change Firmware
PPROTEST OLD VERSION
There is a version already installed in P1 in /det/l1calo/ppmTools/pprotest
CALIPPR FW
- Get the new version of PPROTESTatCERN from Jan (if it has been changed recently), copy it to lxplus and compile it (after setup_pprotest)
- Compress the folder, copy it to P1 and decompress:
tar -zcvf archive.tar.gz directory/
tar -zxvf archive.tar.gz
- Jan's twiki here: https://wiki.kip.uni-heidelberg.de/KIPwiki/index.php/Atlas_Privat:JJ_HowTo_P1FWupdate
- The most recent bit files are in the folder:
/afs/cern.ch/work/j/jjongman/public/bitFiles
Take the needed version and put it in the pprotest package under bitFiles/
- Check in the file:
ppmServices/src/PpmFpga.cxx
at the very end, that the FW version is the wanted one (currently used at the end of 2016: RemFPGA_6.a.0.bit, Calippr_2.4.5.bit).
- If you need to go back to another FW version: put the new bit files back into the cxx file, recompile, make the tar, move it to P1, untar, and upload to the SBCs.
- Check the current PPM slot mask in the file
PPROTESTatCERN/ppmServices/apps/EepromProgramming.cxx
// Board SlotNR Mask 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
unsigned char ppmMask[16] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}; // P1
// unsigned char ppmMask[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0}; // ZDC P1
// unsigned char ppmMask[16] = {0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0}; // TestRig
// unsigned char ppmMask[16] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0}; // TestRig - L1CaloTest Partition only
unsigned char mcmMask[16] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- Enter the P1 network. Create 8 xterms and in each of them log in to one of the SBCs:
sbc-l1c-pcc-0(0-7)
(or use the alias pcc0n)
- In each of them:
setup_pprotest
and cd PPROTESTatCERN
- Run
./EepromProgramming
A menu like this one comes up:
- To load the CALIPP FW:
- Select (1) and give 16 (all PPMs).
- Select (6), load CALIPP to EPROM; give the nMCM number: 16 (all).
- Insert the magic key (58).
- It starts to program and takes ~ 45 min. Once it is done, check that it did 16 PPMs, otherwise the mask was wrong!
- Once it is done:
- Select (7): reload CALIPP from EPROM to FPGA.
- After that one can check that everything went fine:
- Select again all PPMs and do (9). One should read the correct version in each of the 16 PPMs.
- Never do the JTAG programming! It takes forever because it programs every MCM in series. (Once I did it by mistake: I needed to stop it, load the FW into the FPGA (only in the first PPM) with (7), and then restart with (6) and then (7).)
- If I cannot communicate with the VME crate, it may be because the crate has been shut down and the REM needs to be reloaded first (this happens automatically when the standalone partition is run; otherwise it can be done with EepromProgramming):
- Select (2): Load REM firmware
- Load from FLASH memory or from software? (f/s) Select s
- Then I can update the CALIPP or read the info from the crate.
REM Firmware
./FlashProgramming
http://franchin.web.cern.ch/franchin/L1calo/Tests_FW_Jan/REM_ipload/REM_upload_options
Choose the options. The first time you choose all PPMs, then you need to first do option (3), which loads the FW into one of the 6 slots. Select option 0: REM and select the wanted slot. (At the end of 2017 slot 5 with REM version 6.b.0 is in use; in 2016 it was 6.a.0 in slot 4.) It is OKS that chooses which slot to use for the run; the other slots, once uploaded with a version, keep that version for the next use. If I do option (4) I move the FW into the FPGA, but this will be reset at the next configure of the run, because which one to use is read from OKS. If I do option (6) I read what is in the FPGA, so if I upload the FW to the FLASH without moving it to the FPGA it doesn't read it.
The OKS file can be seen in git here
(search for flash-blockN, with N from 0 to 5). N.B. in pprotest the counting goes from 1 to 6, so pay attention to the shift by one between what is in OKS and the REM slot!
LCD = LVDS cable driver: it does some precompensation and fan-out of the LVDS signals coming from the nMCMs and going out to the CP & JEP; it is the bottom-right daughterboard on the PPM.
If there are VME errors, try loading the REM FPGA using the ./EepromProgramming tool and the 'via software' option, then use the VME reset. This brings the modules into a cleanly configured state where you have access to all the memory modules.
NEW PPROTEST.GIT
Software under development by Victor, with some improvements with respect to before (no need to recompile if the mapping has been changed or if a bit file is added; possibility to choose the bit file while running it).
* Calipp FW:
./PROGR_McmEeproms
http://franchin.web.cern.ch/franchin/L1calo/Tests_FW_Jan/LoadFW/PROGRAM_McmEeproms
./PROGR_FPGAs
PPM
1 mV = 1 ADC cnt = 250 MeV ---> Since there are ~ 1000 ADC channels, the ADC saturates at ~ 250 GeV, 2.5V
1GeV=4096 ADC counts
In the LAr electronics analogue saturation also happens, but later (the linear mixer saturates first, at 3.3 V). However, usually when analogue saturation occurs the pulse shape distorts, and this would produce problems for our BCID algorithms. For L1Calo we rely on 'clean saturation', which means we need a linearly rising edge of the signal and clean clipping in case of saturation. This is achieved by the restriction to 2.5 V. The limitation to 2.5 V is done on the AnIn by the differential line receiver, which converts the input signals with unity gain to single-ended pulses with a maximum of 2.5 V.
The saturation level in data is set to 1020. The reason is indeed to take potential bit errors into account. The saturation parameter is a database parameter and can be found in the PprChanDefaults folder.
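Putting the numbers above together (a back-of-the-envelope sketch, assuming 250 MeV per ADC count, the nominal pedestal of 32 counts mentioned in the DAC scan section below, and the saturation value of 1020):

# rough ADC-to-energy conversion for the legacy PPM, per the numbers quoted above
GEV_PER_COUNT = 0.25          # 1 ADC count ~ 250 MeV
PEDESTAL_COUNTS = 32          # nominal pedestal (see the DAC scan section below)
SATURATION_COUNTS = 1020      # saturation level set in PprChanDefaults

def adc_to_gev(adc_counts):
    # convert raw ADC counts to transverse energy in GeV (approximate)
    return GEV_PER_COUNT * (adc_counts - PEDESTAL_COUNTS)

print(adc_to_gev(SATURATION_COUNTS))   # ~247 GeV, i.e. the ~250 GeV / 2.5 V ceiling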
PPM Mapping
- EM
- HEC
- Tile
- FCAL: 4 PPMs: (crates 4 & 5, PPMs 0 & 8)
- SPARES: 4 PPMs:
PPM Calibration
- DAC scan The pedestal is adjusted and measured on a regular basis by the combination of DAC scans and pedestal runs. First the linear dependence between the DAC setting (0-255) and the ADC value is measured by scanning the DAC range and measuring the ADC output for each point. An analysis determines for each tower the slope and offset of the linear function and stores them in the database. Using these parameters, the DAC setting is determined and loaded at CONFIGURE so as to result in a pedestal of 32 (a sketch of this inversion follows after this list). The DAC calibration is needed to compensate for differences between the ADCs.
- PED scan Since the precision of the DAC is coarser than the precision of the ADC (approximately a factor of 2), we cannot rely on the DAC calibration alone to 'predict' the pedestal precisely enough to be used as the zero-line in the LUT. Hence in a second step after the DAC calibration we take a pedestal run, which simply measures the pedestal, histograms it and determines mean and rms.
These measured pedestal values turn out to be roughly at 32, of course, with some variations.
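A minimal sketch of how the DAC setting could be derived from the DAC-scan fit (variable names are hypothetical; the actual code lives in the CalibrationProcessing package linked above):

# sketch: per-tower DAC setting from the linear DAC-scan fit ADC = slope*DAC + offset,
# chosen so that the resulting pedestal is close to the target of 32 ADC counts
TARGET_PEDESTAL = 32

def dac_setting(slope, offset):
    # invert the fitted line and clamp to the 8-bit DAC range (0-255)
    dac = round((TARGET_PEDESTAL - offset) / slope)
    return max(0, min(255, dac))

# example: slope ~2 ADC counts per DAC count (DAC coarser than ADC by ~2), offset 0
print(dac_setting(2.0, 0.0))   # -> 16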
Hardware
Power cycle single PPM
It cannot be done via DCS. You need to log in to the crate SBC and type:
ppmOff #
where # is the PPM number. It does the OFF and ON. Whether this has been done can be monitored by opening DCS and looking at the temperature of some MCM in the corresponding PPM: you will see a sudden drop in temperature and then a slow increase. In order to bring the system back to its initial state the temperature needs to come back, otherwise pedestals might shift. To warm up the PPM a bit, a run in the standalone partition can be done for some minutes.
L1Calo Readout modes
- Default: 5+1 (7+1 used for filter coefficient calculation, 15+1 80MHz used for timing purposes, special runs)
- COOL database with parameters elog
- Deadtime settings for each readout mode: l1whiteboard
- Parameters: NumSamples40, ADC_latency, NumAdcSamples elog
- Event size: (Steve): With the standard five slices each PPM slink has a data size of about 220 words - see for example this:
https://atlasdqm.cern.ch/webdisplay/tier0/1/physics_L1Calo/run_364214/run/L1Calo/ROD/rod_1d_PpPayload
so that's 220*32*4 bytes. So I reckon our normal legacy data size must be about 30 kBytes. (~ 3% of ATLAS).
So in the recent runs with 15 slices, on emptyish events (which approximates to physics!) the equivalent PPM size is about 500 words. So that gives us 500*32*4, i.e. about
65 kBytes in all. So overall we roughly double our data size from 30 kBytes per event to 65 kBytes per event (arithmetic spelled out in the sketch below).
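Spelling out that arithmetic (just a check of the numbers above, nothing official):

# PPM event-size estimate: words per slink * 32 slinks * 4 bytes per word
def ppm_event_size_kb(words_per_slink, n_slinks=32):
    return words_per_slink * n_slinks * 4 / 1000.0

print(ppm_event_size_kb(220))   # ~28 kB -> "about 30 kBytes" with 5 slices
print(ppm_event_size_kb(500))   # ~64 kB -> "about 65 kBytes" with 15 slices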
Change Low-High mu settings: AC (high mu), matched filters (low mu)
Matched filters were used in Run 1 and are now used only for single-bunch filling schemes and heavy ions.
Expectations in terms of trigger rate (Steve): with matched filters, EM and Tau triggers should be pretty much as normal, all other items at a higher rate than with AC filters. For low-pt forward jet triggers
and XE triggers, I wouldn't be at all surprised if those were running at 10 times normal rates (or more) compared to bunch trains. The effect is well visible on L1_EM_EMPTY (noise): with single bunches it is a few Hz (matched filters, optimised against noise), with trains and AC filters it goes up to 30 kHz.
(From 2017, see instructions in the oncall manual.)
We can check from the oncall L1Calo page whether we are running with the high-mu or low-mu configuration here: https://atlasop.cern.ch/oncall/l1calo/L1CaloModStat.php
2018 low mu runs, noise cuts in the database:
EM : Flat 4000
HAD: |eta| < 1.6 has 6000, |eta| > 1.6 has 5000
Some single towers have higher values.
(Instructions from Martin for the 2016 run, how to do it manually)
- Loading low-mu matched filters, LUT slopes and noise cuts:
- (a) First step is usually to upload the three coolinit files to the corresponding results folders. In this case I have done this already so you can directly proceed with the validation in (b). However, when reverting to high-mu settings you will have to do this step with proper high-mu files of course, and instructions are given below. Nevertheless, for completeness and emergencies, the files for low-mu
settings can be found in
/det/l1calo/coolData/acfirsglb_physics/single_bunch
and are called
- PprFirFilterResults_Physics_single_bunch.coolinit
- PprLutValuesResults_Physics_single_bunch.coolinit
- PprNoiseCutResults_Physics_single_bunch.coolinit
-
- (b) Validation of results. In this step the matching attributes of the specified results folders are copied to the validated folder PprChanCalib (or PprChanExtra, respectively). The latter folders are used to configure the system, so this step will have an impact on the performance. Hence, before running the validation it might also be a good idea to dump the previous status of PprChanCalib (PprChanExtra), which could make reversion in case of accidents easier. For this I usually create a 'work' folder. Below is an example sequence of commands.
> cd /det/l1calo/coolData/acfirsglb_physics/single_bunch
> mkdir work_201609xx
> cd work_201609xx
> dumpfolder.py -f Physics -i PprChanCalib
> mv PprChanCalib_Physics.coolinit PprChanCalib_Physics_201609xx_before.coolinit
Now validate the three results folders.
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprFirFilterResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprLutValuesResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprNoiseCutResults
- II. Loading high-mu AC filters, LUT slopes and noise cuts:
(a) Upload the three coolinit files to the corresponding results folders. The results folders are used for storing calibration results without actually using them. So no changes to the performance of the
system are done in this step. The coolinit files with the AC filters can be found in /det/l1calo/coolData/acfir25ns_physics/20160726/acMu29.
> cd /det/l1calo/coolData/acfir25ns_physics/20160726/acMu29
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprFirFilterResults_Physics_20160721_ACmu29.coolinit
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprLutValuesResults_Physics_20160721_ACmu29.coolinit
> loadfolder.py -d "$L1CALO_DB_CONNECT" -f Physics PprNoiseCutResults_Physics_20160721_ACmu29.coolinit
- (b) Validate the results. In this step the matching attributes of the specified results folders are copied to the validated folder PprChanCalib (or PprChanExtra, respectively). The latter folders are used to configure the system, so this step will have an impact on the performance. Hence, before running the validation it might also be a good idea to dump the previous status of PprChanCalib (PprChanExtra), which could make reversion in case of accidents easier. For this I usually create a 'work' folder. Below is an example sequence of commands.
> cd /det/l1calo/coolData/acfir25ns_physics/20160721/
> mkdir work_201609xx
> cd work_201609xx
> dumpfolder.py -f Physics -i PprChanCalib
> mv PprChanCalib_Physics.coolinit PprChanCalib_Physics_201609xx_before.coolinit
Now validate the three results folders.
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprFirFilterResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprLutValuesResults
> useCalib -d "$L1CALO_DB_CONNECT" -f Physics -v PprChanCalib -r PprNoiseCutResults
IDs read as 0x0c1m0s0h, where c is the crate number, m is the module (i.e. PPM) number, s is the submodule (i.e. MCM) number and h is the channel number
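A small decoding sketch, assuming each field occupies one byte as the 0x0c1m0s0h pattern suggests (crate in the top byte, then module/PPM, submodule/MCM and channel); this byte layout is an assumption for illustration, not taken from any L1Calo code:

# sketch: split an ID of the form 0x0c1m0s0h into its four assumed byte fields
def decode_ppm_id(ident):
    crate     = (ident >> 24) & 0xFF
    module    = (ident >> 16) & 0xFF   # PPM number
    submodule = (ident >> 8)  & 0xFF   # MCM number
    channel   =  ident        & 0xFF
    return crate, module, submodule, channel

print(decode_ppm_id(0x02140103))   # hypothetical ID -> (2, 20, 1, 3)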
PPM test, software patches
- Usually Martin puts the patches in P1 here:
/det/l1calo/ppmTools/patches
- Using su l1calo, copy the relevant library (renaming it without the extension) into
/det/l1calo/releases/tdaq-06-01-01/l1calo/pro/installed/patches/x86_64-slc6-gcc49-opt/lib/
- log files are in each sbc crates (sbc-l1c-pcc-0n) here:
/logs/tdaq<version>/<partition>
HDMC (Hardware Diagnostic Monitoring and Control system)
- From the sbc open it with
dbhdmc
and open the correct PPM crate (depending on which SBC it is launched from)
- The igui will open: File --> LoadDBcrate --> select the PPM crate, PPM, MCM and channel. Open the register of interest and click on Read.
Make some hot towers via LUT
This may be needed for Topo testing: changing via hardware the value of the pedestal and boosting it towards saturation (in some dR region where that could be useful).
- Log in to the correct SBC in P1 while the test run is ongoing (ssh -Y sbc-l1c-pcc-0c)
- Run
dbhdmc
after having done setup_l1calo
- Choose the right MCM and channel, and choose either the lutCP or lutJEP. Open it and read the values; there is a list of them. Take the last values that are zero and fill them with ffff. The point
where it changes to non-zero values is where the noise cut is. Change the values just below the noise cut to FF's, and click the
'Write' button to commit the change to the hardware memory.
- It should take effect immediately and there is no need to put it back: once the PPM is configured in the next run it reads the correct value from the DB.
- The tower should become hot in the mapping tool.
- PPM LUT:
Pedestal Corrections
Effect of correcting for the 'average' activity in any particular bunch-crossing. Applied to the FIR filter output before the LUT step.
To enable/disable it in ACE: TRIGGER/L1Calo/V2/Configuration/PprChanDefaults/CR43_PedCorrEnable
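A purely schematic sketch of that ordering (illustrative names and scales only, not the firmware arithmetic): the per-BCID pedestal correction is subtracted from the FIR filter output, and the result is then passed through the LUT (noise cut plus linear calibration).

# schematic only: FIR output, minus the pedestal correction for this BCID, then the LUT
def lut_input(adc_samples, fir_coeffs, ped_corr_this_bcid):
    fir_out = sum(c * s for c, s in zip(fir_coeffs, adc_samples))
    return fir_out - ped_corr_this_bcid

def lut_et(x, noise_cut, slope):
    # LUT sketched as: zero below the noise cut, linear calibration above it
    return 0 if x < noise_cut else round(slope * x)

print(lut_et(lut_input([32, 35, 60, 45, 33], [0, 1, 2, 1, 0], 180), noise_cut=20, slope=0.1))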
CPM-JEP
Overflows
JEM case. If there are e.g. 6 jets in the event (especially at the beginning of the run and in the forward region), the backplane format can only send four jets, so there is an overflow condition: six jets would be reported in the “presence bits” but only the first four have details of large/small Et. Only the first four appear in the CMX readout (with the readout overflow bit set). Since we do not know if the missed jets might have high Et, in this case the firmware sets the maximum in all 25 jet thresholds. The rate of these can be quite high at the beginning of a run, so of the order of a thousand events of this type per run is not uncommon. Jet overflows are almost always in FCAL; the only exceptions tend to be noise bursts. There's also a slightly lower rate of EM overflows, which conversely tend to be in the central barrel region.
Sometimes there are simulation mismatches because the simulation simulates the correct energy even if there are more than 4 jets.
CMX Overflows
HI 2018 runs: lots of overflows. Asymmetric distribution (e.g. see here: https://atlasdaq.cern.ch/info/mda/db/ATLAS/365304/Histogramming.l1calo-athenaHLT./L1Calo/CPM_CMX/Input/cmx_2d_tob_Overflow
) confirmed by Steve: " the difference is CMX 0 deals with TAU TOBs and CMX 1 is for EM TOBS. The TAU minimum TOB threshold is incredibly
low, whereas the EM is boosted compared to standard running to keep the overflows down. So we will definitely see this asymmetry during HI. "
To see which L1Topo item is firing, look at the online page, CTP rates, and search for "-" (Topo items should have a minus in their names; see screenshot below):
Topo Caldo
Tool to generate hot towers to test Topo algorithms but also useful to test something else...
Instructions from Rosa:
connect to pc-l1c-topo-00
either go in /atlas-home/0/simoniel/executables/DANGEROUSTOOLS
or copy /atlas-home/0/simoniel/executables/DANGEROUSTOOLS/topoCaldo.py in your home
for tests with LAr there is no need for any special keys or any special configuration (they already set up ATLAS as they need)
run: python topoCaldo.py
from the menu “Partition” select “ATLAS”
from the menu “Pattern” select “TwoEm90” - when you select the pattern the hot tower will start *automatically*; to stop it you need to select “None” from the “Pattern” menu. This process will cause a lot of error messages to the run control, but it is exactly what LAr people want ;)
Not necessary, but if you are curious: the time stamp of when you activate any pattern can be monitored from our L1Calo on-call page in the ERS messages, filtering for “Undefined” as the application ID.
Errors while running
L1Calo webpages links
Some encountered errors: check ModStat if you see any red light. Depending on how serious the error is, one can consider a TTC restart to fix it.
Topo (2017)
- FATAL due to IPbus transaction failure (error at CONNECT): happens once or twice per week; need to go down and up again with the ATLAS partition.
- l1calo-topo-l1topo1 error (Access attempted on non-validated memory). --> This is not an issue, regardless of the status of the partition. It likely happened during a clock switch: during a clock switch there is a sequence of resets to re-synchronize the various parts of L1Calo, and L1Topo is last and subject to a rather long reset; if the monitoring application tries to read numbers (registers), the IPbus will not respond due to the ongoing reset, hence the error.
- Topo input/output mismatches with CMX/CTP
- Topo busy and stopless removal at ramp down: https://atlasop.cern.ch/elisa/display/357928
- Every time the Topo RoIB is busy, a TTC restart doesn't help; need to reload the Topo FW (and bring the run to shutdown before).
- HLT backpressure: either physics related or generated by a very hot tower (look for hot-tower TOBs).
Look at the plot "Available Cores seen by HLTSV": if the orange line is much lower than the green line it is a problem with the HLT Supervisor (the DAQ oncall will ask the Run Control Shifter to hold the trigger, restart HLT/HLTSV-1, resume the trigger).
- Sometimes Topo gets busy at the beginning of the run. Solved with a TTC restart.
Operation
Dead Times
Explanation from Antoine of the difference between the physics deadtime and the deadtime displayed by the CTP panel (they can be different): https://indico.cern.ch/event/708589/contributions/2909405/subcontributions/249519/attachments/1638435/2615082/L1CT.pdf
The deadtime monitored by the CTP and shown in the BusyPanel corresponds to the fraction of BCIDs in which a L1A would be vetoed, whether one occurs or not.
The physics deadtime (1 - live fraction) is the fraction of L1As which are vetoed. The two strongly depend on the bunch-group pattern, and might differ significantly for a small number of bunches.
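A toy Monte Carlo of Antoine's point (made-up numbers, simple fixed deadtime only): with a sparse bunch pattern the fraction of vetoed BCIDs shown in the BusyPanel can be much smaller than the physics deadtime, i.e. the fraction of would-be L1As that are actually vetoed.

import random

# toy: 3564 BCs per orbit, only the first 600 collide; each filled BC fires a L1A
# with probability p; after an accepted L1A the next `simple_dt` BCs are vetoed
def toy_deadtime(n_orbits=1000, n_bc=3564, n_filled=600, p=0.02, simple_dt=7):
    vetoed_bc = triggers = vetoed_trig = 0
    dead_until = -1
    for orbit in range(n_orbits):
        for bc in range(n_bc):
            t = orbit * n_bc + bc
            if t <= dead_until:
                vetoed_bc += 1                    # a L1A in this BCID would be vetoed
            if bc < n_filled and random.random() < p:
                triggers += 1
                if t <= dead_until:
                    vetoed_trig += 1              # an actual would-be L1A was vetoed
                else:
                    dead_until = t + simple_dt
    print("BCID-veto fraction (BusyPanel-like):", vetoed_bc / (n_orbits * n_bc))
    print("physics deadtime (1 - live fraction):", vetoed_trig / triggers)

toy_deadtime()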
https://twiki.cern.ch/twiki/bin/view/Atlas/LevelOneCentralTriggerDeadtimeSettings
CTP deadtime and project tag settings:
For STABLE BEAMS (data16_13TeV): Simple 7, Complex: b0=6/811 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=2/400 (PIX/IBL)
For beam commissioning (data16_comm): Simple 16, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)
For cosmics running (data16_cos): Simple 16, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)
For beam splashes (data16_1beam): Simple 2500 (LAr in 32 samples mode), Complex: b0=6/811 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=1/400 (PIX/IBL)
For high-rate running (data16_comm): Simple 6, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=7/260 (PIX/IBL) NB: WITHOUT Pixels
For high-rate (100 kHz) running (data16_comm): Simple 4, Complex: b0=15/370 (L1Calo ); b1=42/381 (TRT); b2=9/351 (LAr); b3=7/260 (PIX/IBL) NB: WITHOUT Pixels
For low-rate ID cosmics running (data16_comm): Simple 16, Complex: b0=15/370, b1=42/381, b2=9/351, b3=1/400; SMK: 2297, L1PSK : 6495, HLTPSK: 4830, BGK: 1003; NB:
ONLY WITH Pixels, SCT and TRT (using FastOR trigger); include also other sub-systems
DATABASES
L1Calo Databases https://twiki.cern.ch/twiki/bin/viewauth/Atlas/LevelOneCaloDatabases
OKS (with all hardware configurations)
- Rhys command (Oct 2020): from P1 PC:
dbe -f /atlas/oks/tdaq-09-02-01/l1calo/partitions/L1CaloStandalone.data.xml
To include/exclude any crate (e.g. if one PP crate dies and we need to exclude it from the partition), either do it while running the partition and commit and reload, or do it with dbe: navigate to the right partition, double click, type the name of the excluded crates in "Disabled" and save (you need to know the exact name you want to disable). Vice versa, to enable it, remove it from the disabled list (right click).
dbe -f /atlas/oks/tdaq-09-02-01/combined/partitions/L1CaloCombined.data.xml
e.g. see here the combined partition list:

e.g. see here the names of our crates:

e.g. see here the results for ATLAS partition:
here inside there are the combined partitions and LArgL1CaloCombined and TileL1CaloCombined
dbe -f /atlas/oks/tdaq-09-02-01/combined/partitions/ATLAS.data.xml
* https://atlasop.cern.ch/twiki/bin/view/Main/L1CaloOksDatabase
Two ways of updating an OKS file (they live in SVN, so one first needs to download, modify and upload again):
It can be done via oks_data_editor filename.xml;
it is better to use this because it checks for inconsistencies and avoids typos (Oleg, the Tile expert, uses it).
Martin and Bruce modify the xml files manually.
To do it: in P1 create a tmp folder (which you will erase once done, to avoid mistakes next time, or remember to update it next time before modifying, with oks-update.sh).
To download the interesting package:
- For example, to change the address of the REM firmware version:
oks-checkout.sh /atlas/oks/tdaq-06-01-01/l1calo/sw/l1calo_firmware.data.xml
It checks out the package into my tmp folder. I go inside, open the file and see there are two classes: ppm-rev6-default (to upload the FW from a file, used in the TestRig) and ppm-rev6-flash (to upload the FW from a flash memory, used in P1).
So if we want to have a new REM FW version we can upload it to an unused slot (now we use REM version 0x60a00 in block 3, as defined by this line: "PpmFpgaProgram" "PpmRemFPGA-flash-block3"). In the other blocks there are other, older FW versions that are not used.
<obj class="PpmFpgaProgram" id="PpmRemFPGA-flash-block2">
<attr name="BinaryName" type="string">"FlashMemory"</attr>
<attr name="Description" type="string">""</attr>
<attr name="Authors" type="string" num="0"></attr>
<attr name="HelpURL" type="string">"http://"</attr>
<attr name="VersionID" type="u32">0x60907</attr>
<attr name="CadProject" type="string">""</attr>
<attr name="CheckString" type="string">""</attr>
<attr name="Checksum" type="u32">0</attr>
<attr name="ChipType" type="string">"XCV1000E"</attr>
<attr name="FlashRamBlock" type="u32">0x2</attr>
<attr name="SourceURL" type="string">""</attr>
<attr name="ProgramType" type="enum">"RemFpgaDefault"</attr>
<attr name="DeviceName" type="enum">"RemFpga"</attr>
<rel name="Needs" num="0"></rel>
<rel name="BelongsTo">"FpgaConfiguration" "ppm-rev6-flash"</rel>
<rel name="Uses" num="0"></rel>
</obj>
<obj class="PpmFpgaProgram" id="PpmRemFPGA-flash-block3">
<attr name="BinaryName" type="string">"FlashMemory"</attr>
<attr name="Description" type="string">""</attr>
<attr name="Authors" type="string" num="0"></attr>
<attr name="HelpURL" type="string">"http://"</attr>
<attr name="VersionID" type="u32">0x60a00</attr>
<attr name="CadProject" type="string">""</attr>
<attr name="CheckString" type="string">""</attr>
<attr name="Checksum" type="u32">0</attr>
<attr name="ChipType" type="string">"XCV1000E"</attr>
<attr name="FlashRamBlock" type="u32">0x3</attr>
<attr name="SourceURL" type="string">""</attr>
<attr name="ProgramType" type="enum">"RemFpgaDefault"</attr>
<attr name="DeviceName" type="enum">"RemFpga"</attr>
<rel name="Needs" num="0"></rel>
<rel name="BelongsTo">"FpgaConfiguration" "ppm-rev6-flash"</rel>
<rel name="Uses" num="0"></rel>
</obj>
If we want the new one we will upload it with Jan's code into slot 4, and then when we want to use it we have to change the FW version in the slot and the line here: "PpmFpgaProgram" "PpmRemFPGA-flash-block4".
Once it is modified it needs to be committed: oks-commit.sh -u l1calo_firmware.data.xml -m "comment that goes in cvs"
with the comment, like in SVN.
Pay attention to inconsistencies, otherwise ATLAS cannot start!!!
- To disable a PPM, check out the following file:
/atlas/oks/tdaq-06-01-02/hw/l1calo_crates.data.xml
then look for the ppm you want to disable (e.g. pp3-ppm0) and
COOL (with calibration stuff)
https://twiki.cern.ch/twiki/bin/view/Atlas/LevelOneCaloCoolDatabase
ACE config in P1
- How to configure the database opening:
- Full L1Calo chain COOL database example:
- Example of how to open it: if one wants to see the DB changes vs time:
select the time; it will load ALL the changes from that time. If it is too far in the past, it is better to restrict the selection to the interesting channels (tick Channel selection in the bottom part).
Other stuff
- Setting proxy to edit P1 twiki pages:
https://atlasop.cern.ch/twiki/bin/view/Main/EditTwikiFromAnywhere#ProxySetup
Maintenance
HLT releases and merge requests
example of merge request: https://gitlab.cern.ch/atlas/athena/merge_requests/12888
mail from Kate: Let me explain a little about how the HLT release cycle works: the deadline for developments to be merged into a particular release is Friday evening. We let the nightly tests run over the weekend, and if everything looks good, we decide on Monday at the management meeting if we will build the release or not (sometimes we skip a week if there are no urgent changes). On Monday afternoon, we prepare to launch a reprocessing to validate the release candidate; i.e. the nightly from Sunday. On average, it takes about a week for the grid jobs to run and for the experts to check the output and sign off, and then we choose an appropriate time between fills to deploy the release. So, depending on when a merge request is made, unless it is an extreme emergency, it will generally take at least a week for the update to be deployed online.
Online Software
- Tile towers warnings: /atlas/moncfg/tdaq-06-01-01/combined/triggerMonitor/BeamStatesToIgnore.dat
Isolation and Noise Cuts
So far noise cuts are only tuned for the hadronic part; the EM part is kept at a noise cut of 4000. Work is ongoing at Queen Mary to optimize them.
Birmingham works on isolation (applied by the firmware in CP). Optimizations are performed in collaboration with the egamma group.
LAr recommissioning
Phase2
- Tile demonstrator installed in 2019 in LBA14
External Documentation
FELIX
ALTI
https://twiki.cern.ch/twiki/bin/viewauth/Atlas/LevelOneCentralTriggerALTI
Interesting talks
- L1Calo Joint meeting twiki; link
- Jan:
- Feb 2016 Joint: PPM HW/FW and Sat BCID: link
- Sept 2015 Joint: PPm HW/FW: link
- June 2014: Joint: HW/FW status: link
- 2016 high RR school, summary PPM: link
- 2016 KIP HW: Finding Sat BCID Thresholds: link
- 2016 KIP Analysis meeting: BCID L1calo: link
- 2014 Pedestal Corrections: link
- Claire
- Sebastian Feb 16: Mistimed BCID analysis: link
- Stanislav
- Feb 16: improvedLUT jets: link
- Oct 15: Miss Et, Jet calib link
- Nov 15 TGM: Et, jet calib: link
- Timing talks
- Fabrizio JM July 17: talk
- Thomas TBB calibration, 2020: talkKIP
- KIP meetings, PPM related talks
- Francesco BCID pulser runs: link
- Topo Talks
- Eduard ATLAS weekly Sept16: link
- Imma Oct16: link
ATLAS Software tutorial
is_ls -p ATLAS -n L1CaloStatus -R '.*cp3.*cpm06.*' -v -N
From Steve: The real source of inefficiency with high mu is, essentially, the high mu itself, which makes accurate measurement far more difficult due to both in-time and
out-of-time pile-up, i.e. high-pt activity in nearby bunch-crossings. This is something that really explodes with high pile-up (approximately proportional to mu^2).
The MET turn-on curve is not just a function of the pile-up blurring the resolution, a large part of the resolution comes from the uncertainty of energy measurement for hadronic showers, which is clearly not so well measured at Level-1 as elsewhere. So any additional loss of resolution due to pile-up may be difficult to distinguish from the underlying (poor)
resolution that you'd obtain even with low mu.
For the pile-up part, I'm sure the resolution is not completely independent of mu, but the dependence is probably not as big as one might expect from looking at energy sums. This is due to the noise suppression etc., where the intention is to set it at a level where the majority of pile-up fluctuations are removed. At the cost, of course, of removing some genuine energy.
I'm sure the same is true at HLT too, the pile-up dependence would be far worse without some cluster thresholding etc.
Pileup adds a random MET vector to the measurement of "true" MET (which has its own resolution, for example due to jet measurement that Steve refers to). There are two points to consider:
- The additional resolution due to pileup could be negligible when taken in quadrature with the true MET resolution.
- For large MET, the effect of the addition of a small random pileup vector depends on the random angle with the true MET and on average could be negligible.
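A quick numerical check of these two points (arbitrary numbers, just a sketch using numpy):

import numpy as np

rng = np.random.default_rng(0)

# point 1: resolutions add in quadrature, so a small pileup term barely matters
sigma_true, sigma_pu = 15.0, 5.0           # GeV, illustrative numbers only
print(np.hypot(sigma_true, sigma_pu))      # ~15.8 GeV, vs 15 GeV without pileup

# point 2: adding a small random vector to a large true MET barely changes its
# magnitude on average, because the transverse component averages out
true_met = np.array([100.0, 0.0])          # GeV, along x
pu = rng.normal(0.0, sigma_pu, size=(100_000, 2))
met = np.linalg.norm(true_met + pu, axis=1)
print(met.mean(), met.std())               # mean stays close to 100 GeV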
LHC Stuff
Emittance scans
Heavy Ions
HI is rather different to proton running. You get many many low energy deposits, but little concentration of energies in a particular region, as is typical of jet or electron events.
Pulse shape Oscilloscope
Simulation
Overlay Simulation
Data Quality
Phase I
Power consumption
- PPM: (Martin) citing the PPM paper: "The power consumption was measured using a single, fully populated PP crate in a test lab setup that was operating all PPMs in the typical data processing mode, resulting in 175 A on the +3.3 V supply and 150 A on the +5.0 V supply. This corresponds to 84 W per PPM, which is well below the worst case estimate of 100 W.” So I guess from there, multiplying with 16 modules per crate, it would be about 1.4 kW per crate. (See the cross-check sketch after this list.)
- eFEX: 8 kW per shelf
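Cross-checking the PPM numbers quoted above (just the arithmetic from the quote, nothing more):

# crate power from the quoted supply currents, and the per-PPM share
p_crate_w = 175 * 3.3 + 150 * 5.0     # 577.5 W + 750 W = 1327.5 W per PP crate
print(p_crate_w / 1000)               # ~1.3 kW per crate ("about 1.4 kW")
print(p_crate_w / 16)                 # ~83 W per PPM (the paper quotes 84 W)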
L1Calo Upgrade documentation
TREX
CTP
Other Links
Run 3 stuff
Talks
Trouble shooting
- Start a calibration run and get an error at start of daq -> try to start the calibration partition manually
-
-- SilviaFranchino - 2015-11-16