FASER Tracker Monitoring Scripts
Installation
Python3 is required for running these scripts. Installation instructions can be found on the official Python website, which also covers setting up the Python path if you have difficulty. If python3 is already on your lxplus account, you can skip this step.
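A quick way to check whether python3 is already available on your account:

```shell
# print the installed python3 version; an error means python3 is not on the path
python3 --version
```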
Log in to your lxplus account through ssh.
ssh -YC USERNAME@lxplus.cern.ch
Create a directory called "MonitoringScripts" and cd into it. Run the following.
git clone https://gitlab.cern.ch/faser/tracker-commissioning.git
cd tracker-commissioning/csv-reader
git checkout DCS-monitoring
source ./centosinstall.sh
I recommend adding the following aliases to ~/.bashrc:
alias Assembler="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/Assembler.py"
alias CSVreader="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/CSVreader.py"
alias IVreader="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/IVreader.py"
alias Cleanup="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/CSVCleanup.py"
alias thermtable="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/ThermalTableGen.py"
alias Converter="python3 path/to/MonitoringScripts/tracker-commissioning/csv-reader/Converter.py"
Getting & Storing Data
B161/Prototype Plane
Data is extracted by running influx on your personal pcfaserdcsdev account. Running influx puts the output file in the directory in which influx was run, so I recommend changing directory to where you want the file to be. In the following script, replace "X" with the id number of the plane being tested. "DDMMYY" is the date of the test. "TESTTYPE" should be descriptive. Standard tests include: "longterm","ivscan_8chlvoff","ivscan_8chlvon","ivscan_2chlvoff","ivscan_2chlvon". Change the date-time strings following "time >=" to reflect the timespan containing the test of interest. NOTE: For B161-type data only, the input times are in UTC not CET. (2 hours behind)
ssh -YC CERNUSERNAME@pcfaserdcsdev.cern.ch
influx --ssl --username admin --password faserdcs --host dbod-gb026 --port 8080 -unsafeSsl --precision rfc3339 --database="faserdcs" -execute 'select * from _NGA_G_EVENT.EVENT WHERE time >= '\''2020-01-01T00:00:00Z'\'' AND time <= '\''2020-01-02T00:00:00Z'\'' ' -format 'csv' > b161_planeX_DDMMYY_TESTTYPE.csv
EHN1
Data is extracted by running getdata.sh on your personal faser-dcs-001 account.
ssh -YC CERNUSERNAME@faser-dcs-001.cern.ch
source /data/getdata.sh
This script will show some GUI that you can use to pick the timeframe of interest. This will produce two txt files - one starting with "dataPS" and the other "dataTIM". They should have the same name otherwise. These files are stored in /tmp/ . You should move the txt files to the appropriate ehn1 cernbox directory and delete them from tmp when finished. If there is an extra part of the file name after the TIMESTAMP_TO_TIMESTAMP part, rename it such that everything after the 2nd timestamp is gone. Example script to run:
mv /tmp/data*.txt ~/cernbox/ehn1/DCS_Data/Path_to_txt_files
cd Path/to/txt/files
for f in data*_Produced*.txt; do keeper="$(cut -d'_' -f1-4 <<<"$f")"; mv "$f" "$keeper.txt"; done
Since my scripts only take CSV files, we need to clean these up further. Go to where you installed the scripts (I assume lxplus) and run Cleanup.
ssh -YC USERNAME@lxplus.cern.ch
cd Path/to/txt/files
Cleanup
Select "Cleanup EHN1 file". You will be prompted for a txt file. It does not matter which txt file you choose; the script will find its partner. It will then produce a csv file with the same timestamps. Use this CSV file for any additional analysis. If the file will be analyzed as an IV test, rename it accordingly by appending "ivscan_8chlvoff", "ivscan_8chlvon", "ivscan_2chlvoff", or "ivscan_2chlvon" to the csv file name.
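For example, tagging a cleaned file as a 2ch LV-off scan (the file names here are stand-ins modeled on the example filenames later in this document):

```shell
# demo stand-in for a CSV produced by Cleanup
touch EHN1_2021_0628-11h15_0628-11h30.csv
# append the IV test type so the file is recognized as an IV scan
mv EHN1_2021_0628-11h15_0628-11h30.csv EHN1_2021_0628-11h15_0628-11h30_2chlvoff.csv
```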
EHN1 8ch data only
If you need to analyze data from 8-channel tests done in EHN1, the procedure is a little more involved because the HV data is acquired through the B161 procedure while everything else comes through the EHN1 procedure. This produces two csv files that need to be converted and combined. To that end, I made "Converter".
An example of running Converter to merge two files:
Converter -m -i b161_FILENAME1 EHN1_FILENAME2
This will produce a file starting with "EHN1_conv" that can then be used for all other scripts.
TI12
Identical to EHN1 except data is extracted using faser-dcs-003.
ssh -YC CERNUSERNAME@faser-dcs-003.cern.ch
source /localdisk/launchers/getdata.sh
dataPS/TIM .txt files should be stored in: /eos/project-f/faser-commissioning/tracker/ti12/DCS_Data/
TI12 files and generated plots should be stored in: /eos/project-f/faser-commissioning/tracker/ti12/DCS_Results/
CSVreader
As of 15/3/2021, CSVreader can be run from the command line. A csv file named "LAB_TESTNAME.csv" can be run as follows:
python3 CSVreader.py -i LAB_TESTNAME
As many files as desired can be appended. The --help argument produces the following:
usage: CSVreader.py [-h] [-v] [-l LAB [LAB ...]] [-i ITEMS [ITEMS ...]] [-s]
[-p PLOTS [PLOTS ...]]
EventLooper
optional arguments:
-h, --help show this help message and exit
-v, --visual Enable plots shown and editable.
-l LAB [LAB ...], --lab LAB [LAB ...]
determine which lab configuration. EHN1, B161,
TI12,EHN1v2.List must equal length of items or be
empty.
-i ITEMS [ITEMS ...], --items ITEMS [ITEMS ...]
-s, --startup Uses old UI instead
-p PLOTS [PLOTS ...], --plots PLOTS [PLOTS ...]
Choose plots to generate. 0:all 1:Humidity 2:SCT
3:FrameTemp 4:HV Current 5:LV Current 6:HV Voltage
7:LV Voltage 8:Norm Leakage Current 9:Dew Point
IVreader
IV scans measure the dependence of Leakage Current (LI) on voltage (V). Leakage current is temperature dependent, so it is often normalized to the average temperature of the patch panel. Normalized IV plots are part of the standard tracker commissioning procedure. IV scans are typically about 8-10 minutes long. IV tests on single planes usually include an 8-channel (8ch) configuration, which measures LI on each module, and 2-channel (2ch) configuration, which measures LI across the entire patch panel. For each configuration, there is also a test with low voltage to the plane being "off" (LV off) and "LV on", in which the plane receives a LV power supply that is configured using tcalib mask scans.
Standard commissioning procedure is usually interested in the following plots, all normalized to temperature unless stated otherwise:
- 2ch LV on vs LV off (one plot including both patch panels in both LV settings; 4 curves total)
- 8ch LV on vs LV off (one plot per module; 2 curves per plot)
- QA comparison (comparing 8channel, LV off IV data taken in lab to the first IV scans done with individual modules; one plot per module, 2 curves for each plot)
- Summation tests (Summing LI of all modules in one patch panel compared to LI of the patch panel (2ch data))
IVreader accepts a CSV file generated as described in the first section.
As of 5/7/2021, IVreader can be run from the command line. Assuming IVreader.py is assigned to the alias "IVreader", a csv file named "LAB_TESTNAME.csv" can be run as follows to plot all IV curves contained in the file:
IVreader -i LAB_TESTNAME
This script can also generate all standard commissioning plots. An example is the following, run from a directory containing all named files:
IVreader -v -ac -i EHN1_2021_0628-11h15_0628-11h30_2chlvoff EHN1_2021_0628-11h35_0628-11h45_2chlvon EHN1_conv_2021_0628-10h37_0628-10h42_8chlvoff EHN1_conv_2021_0628-10h52_0628-10h56_8chlvon -qa mod0_20220040200396.csv mod1_20220330200310.csv mod2_20220040200462.csv mod3_20220380200015.csv mod4_20220380200122.csv mod5_20220040200512.csv mod6_20220170200160.csv mod7_20220170200506.csv
Using "IVreader --help" brings up the following:
optional arguments:
-h, --help show this help message and exit
-v, --visual Enable plots shown and editable.
-l LAB [LAB ...], --lab LAB [LAB ...]
determine which lab configuration. EHN1, B161,
TI12,EHN1v2.List must equal length of items or be
empty.
-i ITEMS [ITEMS ...], --items ITEMS [ITEMS ...]
-s, --startup Uses old UI instead
-n, --normalize Normalize IV curve with SCT temperatures.
-qa QAFILES [QAFILES ...], --qafiles QAFILES [QAFILES ...]
QA files.
-aa, --autoapprove Skip dot-checking; assume plots correct
-lv, --lvtest LV on vs LV off test
-qc, --qacompare Match up to qa files given
-nc, --normcompare Compare normalized result to unnormalized result
-sc, --sumcompare Compare sum of 8chan to 2chan
-ac, --allcom Do all commissioning tests for 8chan and 2chan.
Requires items to be listed as:
2chLVoff,2chLVon,8chlvOff,8chlvON and 8 qa files
Acquiring QA Data
All QA data is contained in
/eos/project-f/faser-commissioning/tracker/moduleQA under subdirectories with module names. These can be copied to a working directory. The following is an example with module numbers from Plane 15:
for i in 20220170200276 20220380200017 20220380200129 20220040200360 20220040200025 20220170200098 20220170200204 20220040200299 ; do scp ${i}/${i}_hv_scan.csv path/to/WORKING_DIRECTORY; done
In the working directory after extracting the QA data, you can rename them with module labels like this:
for i in 20220170200276,0 20220380200017,1 20220380200129,2 20220040200360,3 20220040200025,4 20220170200098,5 20220170200204,6 20220040200299,7; do IFS=","; set -- $i; mv $1_hv_scan.csv mod$2_$1.csv; done
Make sure to change the module numbers to the appropriate ones and check that they are correctly mapped to Mod0-7 in the commissioning spreadsheet.
Comparator
This is a new addition that needs to be tested. It is most easily run by first setting up a directory of directories. The directories found here should be the ones generated using CSVreader for either ehn1 or b161 files. EHN1 files may contain data about multiple stations (for instance,
S1L0,S1L1,S1L2,S2L0). For each station-layer contained in the EHN1 file, there should be a b161 directory. (In total, for this example, there would be 5 directories - one EHN1 directory and four B161 directories). If you are testing, you can optionally feed the script the same B161 directory multiple times.
- 1. Go to your test directory OR create an empty directory.
- 2. scp an ehn1 csv file to the directory you made/chose in 1.
- 3. scp b161 csv file(s) to the directory you made/chose in 1.
- 4. Run Assembler > General CSV Analysis > Plot All Measurements on all files from steps 2 and 3. If the script is allowed to complete successfully, you should now have directories containing images.
- 5. Run Assembler > Comparator. When it requests directory names, give it the name (NOT PATH) of the directories generated in step 4 for each lab as prompted. It will accept 1 EHN1 file, and then 1 B161 file for each station-layer active in the EHN1 file.
- 6. A comparison pdf is generated. Check that plots are paired up as expected.
Refer to the commissioning spreadsheets for each lab to determine which Station-layer corresponds to which plane, and what the monitoring data at the time was called. (TO BE CONFIRMED:)
St0L0:Plane1,
St0L1:Plane2... etc.
COMPARATOR
For comparing b161 results to EHN1 results. Assumes you are
running script from directory containing CSVreader-generated
directories from commissioning procedure.
Enter EHN1 directory.
File name input:
ThermalTableGen
Using this utility requires a continuous data set (a .csv file made by Cleanup or influx) in which the system passes through ALL LV testing states - off, on-unconfigured, and on-configured - as described in the general commissioning procedure.
- 1. Go to the directory containing your .csv file.
- 2. Run Assembler and choose ThermalTableGen from the menu OR run ThermalTableGen directly using the 'thermtable' alias.
- 3. A file will be requested. Input the name of your .csv file.
- 4. A series of SCT plots will appear, one for every active plane during the run (usually 3 in EHN1). Left-click at the 3 desired times on each plot (one for each LV state; 3 clicks per plot shown). Visual feedback is not implemented at this time, but there should be text output showing the timestamp selected by your click.
- 5. After each plot has been clicked 3 times, the script will generate one CSV file per plot shown, in the directory from which the script was run. Each file is named after the input file, appended with the SCT plot name, plane label (e.g. S0L1), and _thermaltest.
- 6. Import the CSV file into a spreadsheet program of your choice. Google sheets is most efficient, but something like Excel will also work.
- 7. Check that the values shown are in the correct location and are expected quantities.
- 8. Copy and paste the contents of the table into the appropriate section of the commissioning spreadsheet.
Cleanup
This script has two options: you can create a new csv file from an old csv file but with a new time interval OR you can give it dataTIM/PS generated by getdata.sh to yield an EHN1/TI12 file.
All files should have the lab name in them (B161, TI12, EHN1).
For cleaning up a txt file:
When using getdata.sh, it will produce two .txt files stored in /tmp (see EHN1 procedure). The _ProducedOn... tag must be removed from the file name before doing anything with it as it causes formatting errors.
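A single-file version of the rename (the file name is a stand-in; compare the loop given in the EHN1 section):

```shell
# demo stand-in for a getdata.sh output file carrying the _ProducedOn tag
touch dataPS_2021.3.17-10h45_TO12_2021.3.17-11h15_ProducedOn2021.3.18.txt
f=dataPS_2021.3.17-10h45_TO12_2021.3.17-11h15_ProducedOn2021.3.18.txt
# keep only the first four underscore-separated fields,
# i.e. drop everything after the second timestamp
mv "$f" "$(echo "$f" | cut -d'_' -f1-4).txt"
```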
When the script requests a file, input ONE of the .txt file names (Example: dataPS_2021.3.17-10h45_TO12_2021.3.17-11h15.txt)
It will say when the cleanup is done. Check results using CSVreader.
Utility Files
CONFIG.py - The other script uses this file to determine labels for HV current and voltage, since Module number and MPOD channel number do not always agree. This file can be edited as long as the format is maintained. You should save the default config file under a different name if you wish to make changes. At this time, the script only looks for CONFIG.txt. This should be kept in mind when comparing multiple files.
ReaderTools - Location of class and general tools for the scripts.
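The backup step suggested for CONFIG.py might look like the following (the backup name is arbitrary; the touch is only a demo stand-in, since in practice CONFIG.py ships with csv-reader):

```shell
# demo stand-in for the shipped config file
touch CONFIG.py
# save the default config under a different name before making changes
cp CONFIG.py CONFIG_default.py
```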
Known Issues
Duplicated data(EHN1 only)
Sometimes when pulling data from getdata.sh, the TIM txt file is populated with extra copies of the same set of data. Currently, there is no script to clean this up. Fortunately, this copying happens in blocks, so the duplicated data is not mixed in with itself and cleanup by hand is relatively straightforward. So, for instance, if we want the data 'ABC' but the text file contains 'ABCABCABC', we chop off the extra two copies of 'ABC' since they are redundant and mess things up.
A handy tool for finding where to chop is grep.
grep -c textkeyhere textfilename.txt
This counts how many times the key appears, i.e. how many copies of the data block exist. It is handy to use an entire line of data as 'textkeyhere' because, in normal data, that line would never have another copy in the whole file. Running grep with -n instead prints the line number of each match, which shows where the second copy starts.
When you have found the line where the second copy of the data starts, delete that line and everything after it, however you would do it in your favorite text editor.
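A self-contained sketch of the whole cleanup (the file content here is a toy stand-in for real TIM data):

```shell
# toy file: the block lineA..lineC appears three times, as in the 'ABCABCABC' example
printf 'lineA\nlineB\nlineC\nlineA\nlineB\nlineC\nlineA\nlineB\nlineC\n' > dataTIM_demo.txt
grep -c 'lineA' dataTIM_demo.txt        # how many copies exist (prints 3)
grep -n 'lineA' dataTIM_demo.txt        # line numbers show the second copy starts at line 4
head -n 3 dataTIM_demo.txt > dataTIM_demo_clean.txt   # keep everything before line 4
```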
Legacy Script Instructions
Installation
GitLab file location: https://gitlab.cern.ch/faser/tracker-commissioning
The latest stable release is available under the master branch. The most recent features are under the DCS-monitoring branch.
Windows
0. Open the tracker-commissioning repository page on GitLab.
1. Click the Download button to the left of "Clone". You will be given a choice of types of files to download to. .Zip is fine. This download file contains all commissioning scripts.
2. Extract the files from the zip file. The extracted files are called tracker-commissioning-master by default. In your extracted file, find the csv-reader folder under: \tracker-commissioning-master\csv-reader.
3. Optional: Move the files inside csv-reader to the directory containing your FSM monitoring data. This makes entering path names to the script easier. Otherwise, the location of these files does not matter. If you use the full path name, the script will accept it.
Linux
1. git clone (or otherwise download) the files to the directory of your choosing.
2. Run centosinstall.sh:
chmod +x centosinstall.sh
./centosinstall.sh
centosinstall should download and install all necessary packages to run the script.
Scripts can be run from python IDLE or from the command line when in the installation directory.
python3 Assembler.py
B161 Procedure
Data is extracted by running influx on your personal pcfaserdcsdev account. Running influx puts the output file in the directory in which influx was run.
ssh -YC CERNUSERNAME@pcfaserdcsdev.cern.ch
influx --ssl --username admin --password faserdcs --host dbod-gb026 --port 8080 -unsafeSsl --precision rfc3339 --database="faserdcs" -execute 'select * from _NGA_G_EVENT.EVENT WHERE time >= '\''2020-01-01T00:00:00Z'\'' AND time <= '\''2020-01-02T00:00:00Z'\'' ' -format 'csv' > OUTFILE_NAME.csv
To analyze data, generally you will only need one option of many possible in Assembler.
To run Assembler, ssh into the shifter account.
ssh -YC shifter@sahigashlinux.cern.ch
cd cernbox/b161/Monitoring_IV_data/PlaneX/TestY/
Assembler
Choose "General CSV analysis" and then "Plot all measurements". The program will request the csv file you extracted using influx. You may optionally process more than one file at a time, but this generally isn't needed. Plots will appear. They must be closed to continue producing plots. Plots are automatically saved as they first appear. You may zoom in on plots and save them, either overwriting or as their own file.
IV tests
Follow the same procedure but choose IV Plots. There is now a menu option specifically for B161 commissioning tests. It will ask you for 4 files and QA files. When using the Basic Plotter, QA files are optional.
EHN1 Procedure
Data is extracted by running getdata.sh on your personal faser-dcs-001 account.
ssh -YC CERNUSERNAME@faser-dcs-001.cern.ch
source /data/getdata.sh
This script will show some GUI that you can use to pick the timeframe of interest. This will produce two txt files - one starting with "dataPS" and the other "dataTIM". They should have the same name otherwise. These files are stored in /tmp/ . You should move the txt files to the appropriate ehn1 cernbox directory and delete them from tmp when finished. If there is an extra part of the file name after the TIMESTAMP_TO_TIMESTAMP part, rename it such that everything after the 2nd timestamp is gone.
Since Assembler only takes CSV files, we need to clean these up a little. Switch to the EHN1 shifter computer account and run Assembler.
ssh -YC shifter@sahigashlinux.cern.ch
cd ~/cernbox/ehn1/DCS_Data/Path_to_txt_files
Assembler
Choose "CSV Cleanup". (Optionally just use alias Cleanup instead of Assembler). Select "Cleanup EHN1 file". You will be prompted for a txt file. It does not matter which txt file you choose; the script will find the other. It will then produce a csv file with the same timestamps. Use this file to run Assembler as before.
Assembler
On run, you should get:
============================
ASSEMBLER
============================
Choose what you are doing:
Main Menu
Choose from the list:
1 > General CSV analysis and PDF generation
2 > IV Plots
3 > Thermal Table Generator
4 > CSV Cleanup
5 > Comparator
Enter a number.
Following the dialogue, you will be able to type anything. The dialogue will repeat until you enter a valid entry, which will be an integer corresponding to the menu item as shown. Choosing one of the menu items will take you to the corresponding UI.
CSVReader
The following instructions apply to the script before the cmd line update.
Arrive here by choosing option 1 in the Assembler menu or by directly running CSVreader.py. Running CSVreader.py directly skips PDF generation.
===================================
PANEL TEST CSV READER
============== v.12 ===============
Note that all plots are generated for a file regardless if
that information was relevant to the test associated with that file.
MAIN MENU
Choose from the list:
1 > Plot all measurements
2 > Plot Single Measurement
3 > Custom plotter
Enter a number.
- Plot all measurements: Plots a standard set of data in a CSV file (Leakage Current, HV Voltage, ICC/IDD, VCC/VDD, NTC temperatures, MPOD temperature). If data is missing without an explanation in the errorlog output, contact Savannah.
- Plot Single Measurement: Lists all possible individual plots - useful if, for instance, there was a mistake plotting the current and you do not want to wait through plotting all other measurements to fix it.
- Custom plotter: Usually a test feature. This can be modified by the user in CSVreader.py. Not intended for standard commissioning.
Example of Option 2 menu:
2 >> Plot single measurement
Choose one measurement to plot.
1> Humidity
2> SCT
3> Frame Temp
4> Leakage Current (PP)
5> Leakage Current (Splitter Cable)
6> LV Current
7> LV Voltage
8> High Voltage (PP)
9> High Voltage (Splitter)
10> Temperature vs Leakage Current
11> Normalized Leakage Current
Select from list.
If CSVreader was chosen in Assembler, a pdf containing all plots is generated after CSVreader finishes. The pdf and images are saved to the working directory.
IVreader
Arrive here by choosing option 2 in the Assembler menu or by directly running IVreader.py.
===============================
IV Plot maker
===============v11=============
**NOTE:** To ensure the IV plot is representative
of data, a test plot is displayed before the final plot
that shows Current vs time with selected points as dots.
The dots should be in the valleys of the Current measurement.
If they are not, the point selection can be changed in the code.
-----
MAIN MENU
1 > Basic IV Plotter
2 > Sum IV Plotter
Choose from the menu.
As before, you will need to type an integer to select what sort of plot you are doing.
- Basic IV Plotter: Allows you to enter multiple file names for CSV and txt files of the correct format. On-screen instructions follow.
- Sum IV Plotter: For testing patch panels. For example, checking that splitter cable channels 0-3 add up to patch panel cable 00.
Script Authors
--
SavannahShively - 2020-05-13