Path | Rate |
---|---|
HLT_ZeroBias | 20 Hz |
HLT_Random | 10 Hz |
HLT_Physics | 10 Hz |
HLT_IsoMu27(24,20) | 5 Hz |
HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_DZ | 5 Hz |
HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL | 5 Hz |
HLT_ZeroBias_FirstCollisionAfterAbortGap | 10 Hz |
HLT_ZeroBias_IsolatedBunches | 10 Hz |
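As a quick sanity check of the total bandwidth these paths consume, the rates in the table above can simply be summed (a minimal sketch; the path names and rates are taken verbatim from the table):

```python
# Nominal rates (Hz) of the paths listed in the table above.
rates = {
    "HLT_ZeroBias": 20,
    "HLT_Random": 10,
    "HLT_Physics": 10,
    "HLT_IsoMu27": 5,
    "HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_DZ": 5,
    "HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL": 5,
    "HLT_ZeroBias_FirstCollisionAfterAbortGap": 10,
    "HLT_ZeroBias_IsolatedBunches": 10,
}

total_rate = sum(rates.values())
print("Total nominal rate: %d Hz" % total_rate)  # 75 Hz
```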
Path | Rate |
---|---|
HLT_HT300_Beamspot | 100 Hz |
HLT_HT450_Beamspot | 100 Hz (in the shadow) |
L1SingleEG (tbd) | all available | Electron stream | 120 kB | 100 Hz
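Taking the 120 kB event size and the 100 Hz rate quoted above at face value, the implied bandwidth of the stream is a simple back-of-the-envelope estimate (a rough sketch, not an official figure):

```python
# Numbers taken from the row above; assumed to be average event size and stream rate.
event_size_kb = 120.0  # kB per event
rate_hz = 100.0        # Hz

bandwidth_mb_s = event_size_kb * rate_hz / 1000.0
print("Stream bandwidth: %.1f MB/s" % bandwidth_mb_s)  # 12.0 MB/s
```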
ALCARECO | Primary Dataset | Paths | ALCARECO rate | Data-taking phase | Notes |
---|---|---|---|---|---|
ALCALUMIPIXELS | ALCALUMIPIXELS | AlCa_LumiPixels_ZeroBias_v5, AlCa_LumiPixels_Random_v2 | 1700 Hz, 400 Hz | Always | https://its.cern.ch/jira/browse/CMSHLT-1393 |
AlCaPCCZeroBias | ALCALUMIPIXELS | AlCa_LumiPixels_ZeroBias_v5 | 1700 Hz | Always | https://its.cern.ch/jira/browse/CMSHLT-1393 |
AlCaPCCRandom | ALCALUMIPIXELS | AlCa_LumiPixels_Random_v2 | 400 Hz | Always | https://its.cern.ch/jira/browse/CMSHLT-1393 |
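Since the ALCALUMIPIXELS primary dataset collects both paths, its total rate is the sum of the ZeroBias and Random contributions (a simple consistency check; the values are taken from the table above):

```python
# Per-path ALCARECO rates from the table above.
zerobias_rate_hz = 1700  # AlCa_LumiPixels_ZeroBias_v5
random_rate_hz = 400     # AlCa_LumiPixels_Random_v2

total = zerobias_rate_hz + random_rate_hz
print("ALCALUMIPIXELS total rate: %d Hz" % total)  # 2100 Hz
```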
The beamspot conditions are provided by the tag BeamSpotObjects_byLS_031c09_TEST, available in the PREP database. Load it with:

process.GlobalTag.toGet.append(
    cms.PSet(
        record = cms.string("BeamSpotObjectsRcd"),
        tag = cms.string("BeamSpotObjects_byLS_031c09_TEST"),
        connect = cms.string("frontier://FrontierPrep/CMS_CONDITIONS")
    )
)

Then add this line to the end of your hlt.py file:

process.hltOnlineBeamSpot = cms.EDProducer("BeamSpotProducer")

This will override the beamspot producer and let the HLT use the beamspot from the conditions database rather than the one from the SCAL information in the data stream.
Hi Giovanni,

the module is the PoolDBESSource, and this feature is activated by setting:

DumpStat = cms.untracked.bool(True)

In cmsDriver, this translates into:

--customise_commands='process.GlobalTag.DumpStat = cms.untracked.bool(True)'

There is also another couple of modules that can help you:

process.escontent = cms.EDAnalyzer("PrintEventSetupContent",
    compact = cms.untracked.bool(True),
    printProviders = cms.untracked.bool(True)
)
process.esretrieval = cms.EDAnalyzer("PrintEventSetupDataRetrieval",
    printProviders = cms.untracked.bool(True)
)

You can put them in the end path in order to inspect the ES contents:

process.esout = cms.EndPath(process.escontent + process.esretrieval)
if process.schedule_() is not None:
    process.schedule_().append(process.esout)

for name, module in process.es_sources_().iteritems():
    print "ESModules> provider:%s '%s'" % (name, module.type_())
for name, module in process.es_producers_().iteritems():
    print "ESModules> provider:%s '%s'" % (name, module.type_())

This was set up by Andrea, Javier and me some time ago. HTH, S.
If you want the full explanation of the ECAL alignment procedure, I would suggest having a look at DN 2015/026 [1]. Old, but still valid in terms of methodology.

The problem now is that we opened the Endcaps during EYETS2016, as we did in EYETS2015. From last year's experience the Barrel is not affected, and that is why we think that also this year the Barrel startup conditions that we put online are a good approximation of what we will measure after ~500/pb of data. For the Endcaps, since they were opened, the issue remains: when you open and close, you are not 100% sure you will end up in the same position, due to mechanical stress. Alignment there is very much needed.

As a proxy for the level of disagreement we expect, we can take the difference between the tag at the end of 2015 and the aligned tag of 2016 (what I called "after we aligned in July 2016"). However, to be more conservative, we can use the 2016 tag itself as a proxy, meaning the difference between the ideal position and the "2016 position". It seemed to me we were in a hurry to have an estimate of possible effects on object reconstruction at HLT, and this is what we could promptly provide in a few hours. If you need, for example, the difference between 2015 and 2016, it may take some more time (also given the holiday period ...).

However, we are talking about orders of magnitude here: in order to test the effect on HLT you need the bulk effect of the misalignment, and you cannot optimize any path selections on a misaligned scenario (you can only relax cuts so as not to be affected). In addition, we changed the pixels too this year ... I wouldn't go too tight on the selections (but I have not been following pixel/tracker operations in P5).

Maybe one of the confusions here is that ECAL is an almost rigid body (differently from the tracker): while for the tracker one effect is the relative alignment of the layers, which can be mimicked by a smearing, for ECAL the effect is a systematic bias due to a shift. I hope this explains and sheds some light on the procedure, and on why I think the "data 2016 tag" is a good proxy for "how much we don't know at startup" in terms of position.

One additional thing: we will probably have the Preshower alignment sooner (since it relies on simple tracks), and this could give us a new "order of magnitude" for how much the position changed between 2015 and 2016.
As the first large production of MC samples for 81X is being finalized [1], we've realized that one upgraded pixel condition, the Pixel FED cabling map (as included in the presently available and validated Phase-I MC Global Tag), doesn't reflect the state of the art of our knowledge of the detector being assembled. A newer, more realistic version has been made available, but unfortunately it was not included in time to be validated for the (long-awaited) 810 release.

Now, since we don't have contingency to re-validate the new conditions, it seems likely that GEN-SIM-RAW samples will be produced with this version of the conditions. This, per se, is not a problem, since the same map is consistently applied back and forth in the digi2raw and raw2digi steps, so within a given campaign the effect is null. Nonetheless, across campaigns and across releases one might need to be careful with digitization of these RAW samples, since we foresee moving soon, in 90X, to the newer, more updated version.

That being said, I don't know what plans the TSG group has for the RAW data produced now with 810, but you should keep in mind that if TSG plans to re-digitize them later on, we'll need a compatibility Global Tag, very much as we did this year for the 76X to 80X transition. Since the 81X samples are not yet injected, I am not yet asking you to build the compatibility queue and advertise the situation at the TSG coordination meeting, but please keep this heads-up in mind. We'll keep you in the loop if/when further actions are needed.

[1] https://hypernews.cern.ch/HyperNews/CMS/get/prep-ops/3475.html

One additional note: it may very well be possible that the L1T content will change in 80X with respect to 90X. We'll find this out in the coming weeks, once 90X gets closer to closing. If that is the case, there'll be a second and easier-to-remember reason why the compatibility queue Marco refers to will be needed.
git cms-addpkg CondTools/HLT
git clone git@github.com:cms-AlCaDB/AlCaTools.git
hltGetConfiguration /dev/CMSSW_9_2_0/GRun/V131 > hlt.py

Get l1prescales.xml from TSG tools.

-- ThiagoTomei - 2017-12-15
Attachment | History | Size | Date | Who | Comment |
---|---|---|---|---|---|
alcadb_subscription_service_presentation.pdf | r1 | 2206.8 K | 2017-12-15 - 17:23 | ThiagoTomei | How to subscribe to AlCa-DB changes |