To ensure the validity and verify the physics performance of the trigger software.
Application to release and production validation.
Automation of the verification procedure.
hltvalidation_fastsim, included in relval for fastsim
hltvalidation_preprod, included in mc pre-production
hltvalidation_prod, included in mc full production
In addition, the dqm offline sequence has recently been incorporated; see details.
Validation code that can be run on data is simply transferred to the trigger offline or online dqm, as appropriate (depending eg on event content requirements).
Tags
Release integration testing instructions and the list of tags are currently kept in a central location, replacing their former home.
Automation
In the past, performance checks of the code for release validation were performed by hand by the various trigger validation expert contacts.
The goal pursued here is to make the procedure as automatic as feasible.
This includes producing the validation results simultaneously with the production of the samples, and automatically detecting and reporting performance discrepancies.
The task is organized in three integration stages, to be followed by the subsystems.
Stage-i: Sources
The first step is the development of the validation modules and their inclusion in the production workflow(s).
The validation sequences are defined in HLTValidation and are included in the global automation.
At this stage validation code is developed, and existing analysis modules are adapted as edm analyzers compatible with the DQM framework. The results of the analyses are stored only in the form of dqm objects (ie monitoring elements) for processing in the following stage. The validation module is executed as part of the sample production workflow and the results are stored in the edm file.
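As an illustration, a minimal configuration sketch of such a module and its sequence (the module type, labels, and parameters here are hypothetical, not an actual subsystem implementation):

import FWCore.ParameterSet.Config as cms

# Hypothetical DQM-compatible validation analyzer; its monitoring elements
# are written into the edm file during sample production.
myHLTSubsystemVal = cms.EDAnalyzer("MyHLTSubsystemValidator",
    # parent dqm folder for the results (see the integration requirements below)
    dqmFolder      = cms.untracked.string("HLT/MyHLTSubsystem_Val/"),
    triggerResults = cms.InputTag("TriggerResults", "", "HLT")
)

# sequence to be plugged into the HLTValidation automation
myHLTSubsystemValidation = cms.Sequence(myHLTSubsystemVal)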
Stage-ii: Harvesting
The second step is the collation of the results from the mc data files and computation of global quantities, namely trigger efficiencies.
The hlt harvesting sequences are defined in HLTValidationHarvest and are included in the global automation.
This stage is implemented as a separate analyzer with respect to the code which produced the validation sources. For simpler applications a generic implementation can be adopted; an example configuration is available based on the generic client, aka postprocessor.
It supports in particular the computation of efficiency turn-on curves and dependencies; the computation of corresponding global efficiency values has also been added to serve the required purposes.
Note that, to avoid the standard histogram rescaling when overlaying references in the dqm gui, it is recommended that efficiencies be stored as TProfile objects. Root's TGraph(AsymmErrors) objects are not currently supported in the dqm core; keep this in mind when handling uncertainties.
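For illustration, a minimal harvesting sketch based on the generic client (the folder and histogram names are hypothetical; each efficiency entry has the form "&lt;output name&gt; '&lt;title&gt;' &lt;numerator&gt; &lt;denominator&gt;"):

import FWCore.ParameterSet.Config as cms

# Hypothetical post-processor computing efficiencies from the stage-i histograms
myHLTSubsystemHarvest = cms.EDAnalyzer("DQMGenericClient",
    subDirs    = cms.untracked.vstring("HLT/MyHLTSubsystem_Val/*"),
    efficiency = cms.vstring(
        "effVsEta 'HLT efficiency vs #eta' recoEtaPassHLT recoEtaAll"
    ),
    resolution = cms.vstring()
)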
Stage-iii: Publishing
The third step deals with release comparison, detection of discrepancies through the application of quality tests, and publishing of the results.
The results are finally displayed in the offline dqm gui following production.
Discrepancies among the releases being compared may be detected by applying quality tests.
These are specified in the sequence, with example xml and tester files available; these should be implemented for each subsystem.
See also generic examples and description.
Discrepancies amongst releases may be quantified via statistical tests and highlighted in a webpage, making use of the RelMon tool.
A small collection of the most significant histograms, O(5-10), should be defined and highlighted in the gui via layouts, to provide a quick single-glance summary of the performance of each trigger subsystem.
This subset, which coincides with the summary plots referred to below, is specified in the shift layout.
In addition, the subsystem histograms which will constitute the regular validation reports are to be organized through the corresponding layout.
Layouts allow a brief description of each plot.
Further detailed explanations, alongside a reference of the currently expected behavior, are provided in an external page, to be linked from there. These documents provide a summary demonstrating the current performance of the triggers involved; features observed and improvements being pursued are noted there.
The visualization of the histograms in the gui may be tailored and improved through the implementation of dqm render plugins.
Subsystem results for validation reports may be linked from the gui (see instructions), replacing the old macro-generated pdf collections of plots.
Features of this step can be suitably tested in a private installation of the dqm gui; instructions are provided, and useful scripts are also in lxplus:~nuno/public/validation/dqmgui.
Up-to-date recipes and detailed guidelines are provided below.
Requirements for integration
Each trigger subsystem provides a sequence for each of the stages, to be integrated in the validation workflow.
The validation procedure is based on the dqm framework; the validation code therefore needs to be made dqm compatible.
The generation of printouts, log files, root files, and other auxiliary output should be turned off in the default settings. In particular, cout printouts are forbidden; relevant messages should be directed to the message logger (see eg).
Developers are asked to include a configuration parameter in their analyzers specifying the parent dqm folder that contains the respective validation results. This should be set to the current default, HLT/MyHLTSubsystem_Val/, allowing the results to be distinguished from dqm offline sources (which may similarly be suffixed with _Offline).
There are dqm restrictions on the number of bins, and variable (automatic) histogram binning is forbidden.
Filters are forbidden in the validation sequences (suitable workarounds may be considered, such as the use of compatible framework modules to manipulate collections).
In order to be executed centrally, the validation code is required to be part of the release being validated. For this reason, dedicated tests are mandatory before changes can be integrated.
The current testing procedure is described (replacing the one formerly provided).
Recipe for preparing layouts, references, QTs... and more
This section gathers instructions to aid developers in establishing the final step, ie stage-iii, of the procedure.
In case of difficulties with the following instructions please contact J.Klukas so that they can be updated or improved.
In order to test out layouts, render plugins, and reference histograms, you will need to set up your own personal copy of the DQM GUI. You can set this up from any machine, but these instructions assume that you are using an lxplus machine (just be sure to shut down the GUI when you're done). If you are accessing a GUI on lxplus from outside the cern.ch domain, you must use a proxy so that your browser can reach the lxplus machine: change your browser settings as described on the OnlineDQMTestBed page, and create a tunnel by running the following command in a terminal window on the same machine where you'll be running your browser (modifying it to fit your username and the desired lxplus machine):
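A typical tunnel command has the form below (a sketch: the port number must match the proxy settings described on the OnlineDQMTestBed page, and the username/machine are placeholders):

ssh -ND 1080 username@lxplus.cern.ch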
It will ask for your password, and then hang. This is good; the tunnel has been established and will stay open as long as this terminal window stays open.
Now, to set up a testbed, you will need to follow sections of the documentation on the DQMTest page, but use the list below as a guide for which sections to run for our purposes:
Run instructions for "Installing the GUI server", with the following changes:
Be sure to modify the very first line of code (you only need to run this once, although it appears in every section) to point to /tmp rather than /build, generating something like this:
DQMV=5.0.0 ARCH=slc4_ia32_gcc345 DEV_AREA=/tmp/$USER/dqmtest DATA_AREA=/tmp/$USER/dqmdata
Also, the default instructions have you check out from CVS anonymously, so if you would like to be able to commit the changes you make later on, be sure to leave out the following command: export CVSROOT=:pserver:anonymous@cmscvs.cern.ch:/cvs/CMSSW
For the cvs co commands, you will probably want to use the tag OFFLINE_CONFIG_PRODUCTION rather than ONLINE_CONFIG_INTEGRATION.
These commands fetch a large number of files, so it may take a few minutes to complete.
Run instructions for "Starting the GUI server", remembering:
You must ignore the first line, since you've already set up these variables to point to /tmp. Changing these will break the instructions.
Run instructions for Setting up the collector
Copy a file of interest to the $DATA_AREA
This must be a harvested output file with a name following the DQM conventions (DQM_V*_R*_DbsPathOfDataset); an illustrative name is shown after this list.
For testing, you can copy one of the official harvested CMS.RelVal files from castor:
A list of the files harvested by the DQM group can be found online: https://cmsweb.cern.ch/dqm/offline/data/browse/ROOT
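For orientation, a name following this convention could look like the following (the dataset and release here are purely illustrative):

DQM_V0001_R000000001__RelValTTbar__CMSSW_3_1_0-MC_31X_V1-v1__GEN-SIM-RECO.root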
Run instructions for Registering DQM output files to the web server
Again, run instructions for Starting the GUI server, which causes a restart so the GUI can recognize the new file
Again, run instructions for Setting up the collector, to complete the restart
Now, you are ready to browse through your stand-alone GUI. The step of starting the GUI server should have generated a message on your terminal with a link to the location of the server. Access that page (in the browser where you've set up a proxy if you're outside of CERN), and you have access to your private GUI. To learn how to access the dataset you uploaded and navigate around, take a look at the Central GUI section of this page.
In case of difficulties with the following instructions please contact J.Klukas so that they can be updated or improved.
To enable the CMS.RelVal layouts, you must modify $DEV_AREA/config/server-conf-devtest.py to load hlt_relval-layouts.py and shift_hlt_relval-layouts.py. Add "hlt_relval" to the two LAYOUT definitions, like this:
LAYOUTS = ["%s/%s-layouts.py" % (CONFIGDIR, x) for x in
("csc", "dt", "eb", "ee", "hcal", "hlt", "hlx", "l1t", "l1temulator", "rpc", "pixel", "sistrip", "hlt_relval")]
LAYOUTS += ["%s/shift_%s_layout.py" % (CONFIGDIR, x) for x in
("csc", "dt", "eb", "ee", "hcal", "hlt", "hlx", "l1t", "l1temulator", "rpc", "pixel", "sistrip" , "fed", "hlt_relval" )]
Again, follow the instructions from DQMTest for "Starting the GUI server" and "setting up the collector", so that this change is picked up.
Now, you can modify shift_hlt_relval_layouts.py to define a short (5-10 histogram) layout for your subsystem. The changes should automatically be reflected in the GUI whenever you save. To get a sense of how to write these layouts, take a look at the EGAMMA section of the file, then navigate to HLT/HLTEgammaValidation/Zee Preselection/doubleEle5SWL1R to see the output that this code produces. When you click on the plots, text should appear in the floating "Description" box, corresponding to the content defined in the python file.
Example:
muonPath = "HLT/Muon/Distributions/HLT_IsoMu3/"
muonDocumentation = " (HLT_IsoMu3 path) (<a href=\"https://twiki.cern.ch/twiki/bin/view/CMS/MuonHLTOfflinePerformance\">documentation</a>)"
def trigvalmuon(i, p, *rows): i["00 Shift/HLT/Muon/" + p] = DQMItem(layout=rows)
trigvalmuon(dqmitems, "Efficiency of L1",
[{'path': muonPath + "genEffEta_L1",
'description': "Efficiency to find an L1 muon associated to a generated muon vs. eta" + muonDocumentation}])
If you are happy with your layouts, you can commit the changes directly from this folder, as long as you did not check out the files from CVS anonymously. All CMS.RelVal developers should already have been given developer access for the DQM/Integration package. Once you've tested and committed your changes, make a new tag and contact Nuno Leonardo to announce it. Once the new tag is picked up, your layouts will be visible in the Offline GUI.
In case of difficulties with the following instructions please contact J.Klukas so that they can be updated or improved.
Some quality tests simply compare against numerical thresholds (like ContentsWithinExpected), but others compare against reference histograms (like Comp2RefChi2), which must be included in your harvested file. To try out embedding such reference histograms, create a configuration using the following cmsDriver command (changing the CMS.GlobalTag as needed):
To this, you should add the following content, replacing the workflow and fileNames with those of a GEN-SIM-RECO dataset, and changing the referenceFileName to the file you wish to use for reference histograms:
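A minimal sketch of such additions (the workflow string, dataset path, and file names below are placeholders; the DQMStore referenceFileName parameter points at the file providing the reference histograms):

# placeholders: dataset workflow and input files of a GEN-SIM-RECO dataset
process.dqmSaver.workflow = '/RelValTTbar/CMSSW_3_1_0-MC_31X_V1-v1/GEN-SIM-RECO'
process.source.fileNames = cms.untracked.vstring('/store/relval/...')
# file providing the reference histograms to be embedded
process.DQMStore.referenceFileName = cms.untracked.string('DQM_Reference.root')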
When you run this configuration, the output file should include a top-level folder named "Reference", which will include the same tree of plots as the "DQMData" folder, but taken from the reference file. You can now use this configuration as a starting point for including comparison-based quality tests.
Note also the ongoing developments for db-based references (see eg pdf).
In case of difficulties with the following instructions please contact J.Klukas so that they can be updated or improved.
To define DQMQualityTests, you must add the QualityTester module in your path (in the same configuration as you used to define reference histograms), and point it to an xml file containing definitions of quality tests you want to run. You should consult the DQMQualityTests page for documentation, and look at an example xml file in CVS, or search for other examples. After you create your file, be sure to point the qTester module to it through the FileInPath parameter.
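As a sketch, the module configuration could look like this (the xml FileInPath is a placeholder to be replaced with your own file):

import FWCore.ParameterSet.Config as cms

# schedule the QualityTester and point it to the QT definitions (xml)
qTester = cms.EDAnalyzer("QualityTester",
    qtList                  = cms.untracked.FileInPath('MySubsystem/MyPackage/data/qualityTests.xml'),
    prescaleFactor          = cms.untracked.int32(1),
    getQualityTestsFromFile = cms.untracked.bool(True)
)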
You can now run your updated configuration, and browse through the output file. The QualityTester module will dump strings into the folders alongside any histograms it checked. Consult the DQMQualityTests page for information on using C++ code to read these strings. Also, on the GUI, you can click on the drop-down menu for alarms to view only histograms which failed a quality test.
Integrating Quality Tests Into Automated Production
To include your quality tests in the automated production, you must place your xml file in your validation module's data area (example from muon HLT) and place the QualityTester configuration in your python area (example for muon HLT). To include these in the automatic validation, you must add your QualityTester module to the central QT validation sequence. Once you've tested everything, contact Nuno Leonardo and point him to your updated version of the central QT validation sequence so that he can commit the change.
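A sketch of such an addition (all sequence and module names here are illustrative, not the actual central sequence):

# central QT validation sequence extended with the new subsystem tester
hltValidationQT = cms.Sequence(hltMuonQTester + hltEgammaQTester + mySubsystemQTester)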
In case of difficulties with the following instructions please contact Z.Gecse so that they can be updated or improved.
General rendering
To modify how histograms are displayed in the gui, one may define so-called 'dqm render plugins'; see CMS.DQMGuiPlugins. In such a plugin one can set root-style aspects such as line/marker/fill colors, log scales, and so on. See a nice example from the heavy flavor HLT.
The regular validation reports, nominally produced by hand in the past from the harvested relval results, should also be generated automatically. This is achieved by defining a layout intended for the report. The form of the report, previously a hand-generated pdf, is to be replaced by a link to the subsystem layout in the offline dqm gui.
Reports can be pointed to from the gui through urls:
a single plot, where a problem has been spotted (ex.)
a release report, corresponding to the layout-defined selection for the release (ex.)
Gui url instructions are available in CMS.DQMGuiStart.
Examples: