Development

Date Action
2015-05-11 Testing rcd_empty
  • Added Jorgen's schema file for the rcd_trb package - this can be done with oks_data_editor:
    Right click on a Data File and select 'Details', right click in the 'Include files' list and select a location in the 'Add from' sub menu.
  • Using the schema, created a ReadoutModuleTRB object called TRBModule, contained by the RCD object called BL4SApp
  • Created a HW_InputChannel called TRB_channel_0, contained by TRBModule and added as a resource of the Segment called BL4SSegment
  • Compiled all modules in dbg mode
  • Switched to prefer dbg code and enabled tracing on RCDExample packages
  • Ran BL4S partition and confirmed configuration and transitions of modules were as expected
2015-05-07 Monitoring beam muons
  • Spoke to Serguei Kolos about oh_display and ohp - he directed us to /afs/cern.ch/atlas/project/tdaq/inst/tdaq/tdaq-05-05-00/ohp/share/example.conf.xml
  • Started first recording of beam muons on H8 line (run 1431024945)
2015-05-03 Very long run testing
  • Some PMG timeouts around 07:06
  • BL4SApp starts throwing errors:
    20:05:12 ERROR BL4SApp rc::ParentUpdateFailure Failed to notify the parent controller "BL4SSegment" about changes in my status
    20:05:11 ERROR BL4SApp rc::CorbaException Received CORBA exception "TRANSIENT" when interacting with BL4SSegment
  • IGUI commands raise the following errors. First, IGUI - Command Error:
    Message:
        daq.rc.RCException$CORBAException: Failed contacting "BL4SSegment". Reason: org.omg.CORBA.TRANSIENT: Retries exceeded, couldn't reconnect to 137.138.89.83:34362  vmcid: 0x0  minor code: 0  completed: No
    Level:
        SEVERE
    Stack Trace:
    Failed contacting "BL4SSegment". Reason: org.omg.CORBA.TRANSIENT: Retries exceeded, couldn't reconnect to 137.138.89.83:34362 vmcid: 0x0 minor code: 0 completed: No
        daq.rc.CommandSender.executeCommand(CommandSender.java:376)
        Igui.Igui.sendControllerCommand(Igui.java:1919)
        Igui.RunControlAdvancedPanel$ButtonActions$1.run(RunControlAdvancedPanel.java:249)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
    Retries exceeded, couldn't reconnect to 137.138.89.83:34362
        org.jacorb.orb.iiop.ClientIIOPConnection.connect(ClientIIOPConnection.java:226)
        org.jacorb.orb.giop.GIOPConnection.sendMessage(GIOPConnection.java:1072)
        org.jacorb.orb.giop.GIOPConnection.sendRequest(GIOPConnection.java:1017)
        org.jacorb.orb.giop.ClientConnection.sendRequest(ClientConnection.java:308)
        org.jacorb.orb.giop.ClientConnection.sendRequest(ClientConnection.java:289)
        org.jacorb.orb.Delegate._invoke_internal(Delegate.java:1419)
        org.jacorb.orb.Delegate.invoke_internal(Delegate.java:1244)
        org.jacorb.orb.Delegate.invoke(Delegate.java:1232)
        org.omg.CORBA.portable.ObjectImpl._invoke(ObjectImpl.java:475)
        rc._commanderStub.executeCommand(_commanderStub.java:1185)
        daq.rc.CommandSender.executeCommand(CommandSender.java:358)
        Igui.Igui.sendControllerCommand(Igui.java:1919)
        Igui.RunControlAdvancedPanel$ButtonActions$1.run(RunControlAdvancedPanel.java:249)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
    Then, IGUI - Transition Error:
    Message:
        daq.rc.RCException$CORBAException: Failed contacting "RootController". Reason: org.omg.CORBA.TRANSIENT: Retries exceeded, couldn't reconnect to 137.138.89.83:42921  vmcid: 0x0  minor code: 0  completed: No
    Level:
        SEVERE
    Stack Trace:
    Failed contacting "RootController". Reason: org.omg.CORBA.TRANSIENT: Retries exceeded, couldn't reconnect to 137.138.89.83:42921 vmcid: 0x0 minor code: 0 completed: No
        daq.rc.CommandSender.executeCommand(CommandSender.java:431)
        daq.rc.CommandSender.makeTransition(CommandSender.java:468)
        Igui.Igui.sendRootControllerTransitionCommand(Igui.java:1890)
        Igui.MainPanel$8.run(MainPanel.java:2606)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
    Retries exceeded, couldn't reconnect to 137.138.89.83:42921
        org.jacorb.orb.iiop.ClientIIOPConnection.connect(ClientIIOPConnection.java:226)
        org.jacorb.orb.giop.GIOPConnection.sendMessage(GIOPConnection.java:1072)
        org.jacorb.orb.giop.GIOPConnection.sendRequest(GIOPConnection.java:1017)
        org.jacorb.orb.giop.ClientConnection.sendRequest(ClientConnection.java:308)
        org.jacorb.orb.giop.ClientConnection.sendRequest(ClientConnection.java:289)
        org.jacorb.orb.Delegate._invoke_internal(Delegate.java:1419)
        org.jacorb.orb.Delegate.invoke_internal(Delegate.java:1244)
        org.jacorb.orb.Delegate.invoke(Delegate.java:1232)
        org.omg.CORBA.portable.ObjectImpl._invoke(ObjectImpl.java:475)
        rc._commanderStub.makeTransition(_commanderStub.java:1081)
        daq.rc.CommandSender.executeCommand(CommandSender.java:413)
        daq.rc.CommandSender.makeTransition(CommandSender.java:468)
        Igui.Igui.sendRootControllerTransitionCommand(Igui.java:1890)
        Igui.MainPanel$8.run(MainPanel.java:2606)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
  • Logs from this time:
    /tmp/part_BL4S/RCD@BL4SApp@TestRCAppRunning-test_app_pmg_bl4sdaq.cern.ch_1430628559.err
    /tmp/part_BL4S/RCD@BL4SApp@TestRCAppRunning-test_app_pmg_bl4sdaq.cern.ch_1430628579.err
    /tmp/part_BL4S/BL4SSegment_bl4sdaq.cern.ch_1430411323.err
    /tmp/part_BL4S/RootController_bl4sdaq.cern.ch_1430411315.err
    /tmp/part_BL4S/RCD_Monitor_bl4sdaq.cern.ch_1430411369.out
    /tmp/part_BL4S/RCD_Monitor_bl4sdaq.cern.ch_1430411369.err
    /tmp/part_BL4S/monitoring-conductor_bl4sdaq.cern.ch_1430411315.out
    /tmp/part_BL4S/core.26581
  • PMG lists the following (pmg_list_partition -p part_BL4S):
    Asking all the agents about processes running in partition part_BL4S
    APPLICATION            PARTITION   HANDLE                                                   
    CHIP                   part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/CHIP/1                   
    DDC                    part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/DDC/1                    
    DF                     part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/DF/1                     
    DFConfig               part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/DFConfig/1               
    DQM                    part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/DQM/1                    
    Histogramming          part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/Histogramming/1          
    ISRepository           part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/ISRepository/1           
    MTS                    part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/MTS/1                    
    Monitoring             part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/Monitoring/1             
    PMG                    part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/PMG/1                    
    RDB                    part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RDB/1                    
    RDB_POOL_1             part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RDB_POOL_1/1             
    RDB_RW                 part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RDB_RW/1                 
    ResInfoProvider        part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/ResInfoProvider/1        
    Resources              part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/Resources/1              
    RunCtrl                part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RunCtrl/1                
    RunCtrlStatistics      part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RunCtrlStatistics/1      
    RunParams              part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/RunParams/1              
    Setup                  part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/Setup/1                  
    ipc-server             part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/ipc-server/1             
    monitoring-conductor   part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/monitoring-conductor/1   
    mts-worker             part_BL4S   pmg://bl4sdaq.cern.ch/part_BL4S/mts-worker/1             
    BL4SApp                part_BL4S   pmg://lnxpool41.cern.ch/part_BL4S/BL4SApp/2
2015-04-30 Long run testing
  • Ran part_BL4S and TRT_Testbeam overnight
  • Both remote IGUI instances had gone
  • part_BL4S had lost a few processes
  • Later on, PMG was briefly unavailable - this may be due to part of the partition being restarted with a new IGUI; will retest with a clean partition
  • Agent log: bl4sdaq.cern.ch:/logs/tdaq-05-05-00/initial/pmg_agent_bl4sdaq.cern.ch_1430324493.out
  • Last job (test_app_pmg -p part_BL4S -H lnxpool41.cern.ch -a BL4SApp) ran at 17:52:27
  • Next batch of jobs all started at 17:52:49
  • RootController (bl4sdaq:/logs/tdaq-05-05-00/initial/DefaultRootController_bl4sdaq.cern.ch_1430324568.err) raised TIMEOUT at 17:52:44
2015-04-29 Integrated monitoring
  • Fixed issue with v792 channel labelling - turns out the upper and lower 16 channels are interleaved with each other
  • Have rcd_monitor running continuously, collecting histograms every 10 seconds
  • Added RCD_Monitor to BL4S configuration database as a CustomLifetimeApplication under the BL4SSegment, lifetime set to SOR_EOR so the monitoring is available when running
  • Worked with Emre to test TRT_Monitor on TRT and BL4S hardware
2015-04-26 Modified DAQ school monitoring code to monitor all channels
  • Saved in new module RCDMonitor
  • bl4sdaq paused for ~7 minutes with the message:
    Apr 26 17:31:16 bl4sdaq kernel: volume 537093198 is busy or server is down, rechecking
  • Strange monitoring-conductor error:
    omniORB: From endpoint: giop:tcp:137.138.89.83:51793. Detected GIOP 1.2 protocol error in input message. giopImpl12.cc:411. Connection is closed.
2015-04-22 Run DAQ school monitoring program in BL4S partition
  • CmdLine object needs a NULL instead of 0
  • TDC data now has a timestamp as 'channel 0' and the first physical channel is now named 'channel 1'
  • Included TH2.h to produce 2D histograms
  • Published a 2D histogram of charge from QDC0 vs. QCD1
  • ROOT still has the horrific 16-bit colour palette
2015-04-13 Run TRT partition on TRT hardware
  • Fixed pmgserver issue by removing reject rule in iptables for pc-test-trt-01 - thanks to Giovanna (See HowTo#Setting_up_computers for more)
  • Set addresses to match the TRT hardware
2015-04-09/10 Set up TEST-TRT PCs
  • Ran into trouble running the TRT_Testbeam partition - Giovanna helped kill a rogue IPC server and diagnose a Linux ulimit on threads that was causing applications to fail randomly
  • Had trouble getting the PMG server to start on the TRT SBC (lnxpool46) - Actually it started all right but couldn't communicate back to its client. The client needs to have iptables rules relaxed from the default.
2015-04-01 Met with TRT team for discussion of requirements
  • Forked bl4sdaq repository to have a TRT package: https://svnweb.cern.ch/cern/wsvn/bl4sdaq/TRTTestBeamDaq
  • Ran BL4S DAQ with pulser input - achieved 10kHz readout of full TDC buffer and 32x QDC channels
2015-03-26 Updated to tdaq-05-05-00 with the help of Jorgen and Per
  • Having problems with passwordless login to lnxpool41 - need root access to debug server side of connection - daquser seems to have no such problems.
  • This works fine on sbctest-717
  • setup_daq sometimes has problems starting the pmgserver on lnxpool41 - probably as a result of the above.
2015-03-26 Updating to tdaq-05-05-00
  • Remember to update the project.cmt file or you'll include headers from the old version.
  • Updated tags for the partition and sw_repository objects in the database.
  • Starting the partition gives an error message that "/logs" doesn't exist, supposedly defined in the LogsRoot attribute of the partition (the mention of LogsRoot is some nonsense from the setup_daq script - the message comes from checking $TDAQ_LOGS_PATH, $TDAQ_RESULTS_LOGS_PATH and $TDAQ_BACKUP_PATH, which are populated by rc_print_partition_env).
    The path is correctly defined in the LogRoot attribute; can't see this new attribute in the tdaq-05-05-00 schema. The value looks to come from the initial partition defined in '/afs/cern.ch/atlas/project/tdaq/databases/v41/daq/segments/setup-initial.data.xml', but that has LogRoot = '/logs/${TDAQ_VERSION}'
  • The initial partition tries to connect to pc-atd-cc-02.cern.ch - while I was working, someone modified the database file above to fix this.
  • IGUI Main Panel Error when clicking 'Set Values' in the Run Settings:
    Message:
        java.util.concurrent.ExecutionException: Igui.IguiException$ISException: Checkout from RunParams IS server failed: is.InfoNotCompatibleException
    Level:
        SEVERE
    Stack Trace:
    Igui.IguiException$ISException: Checkout from RunParams IS server failed: is.InfoNotCompatibleException
        java.util.concurrent.FutureTask.report(FutureTask.java:122)
        java.util.concurrent.FutureTask.get(FutureTask.java:188)
        javax.swing.SwingWorker.get(SwingWorker.java:602)
        Igui.MainPanel$RunParamsUpdater.done(MainPanel.java:1416)
        javax.swing.SwingWorker$5.run(SwingWorker.java:737)
        javax.swing.SwingWorker$DoSubmitAccumulativeRunnable.run(SwingWorker.java:832)
        sun.swing.AccumulativeRunnable.run(AccumulativeRunnable.java:112)
        javax.swing.SwingWorker$DoSubmitAccumulativeRunnable.actionPerformed(SwingWorker.java:842)
        javax.swing.Timer.fireActionPerformed(Timer.java:312)
        javax.swing.Timer$DoPostEvent.run(Timer.java:244)
        java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:251)
        java.awt.EventQueue.dispatchEventImpl(EventQueue.java:733)
        java.awt.EventQueue.access$200(EventQueue.java:103)
        java.awt.EventQueue$3.run(EventQueue.java:694)
        java.awt.EventQueue$3.run(EventQueue.java:692)
        java.security.AccessController.doPrivileged(Native Method)
        java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
        java.awt.EventQueue.dispatchEvent(EventQueue.java:703)
        java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
        java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
        java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
        java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
        java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
        java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
    Checkout from RunParams IS server failed: is.InfoNotCompatibleException
        Igui.MainPanel$RunParamsUpdater.doInBackground(MainPanel.java:1386)
        Igui.MainPanel$RunParamsUpdater.doInBackground(MainPanel.java:1343)
        javax.swing.SwingWorker$1.call(SwingWorker.java:296)
        java.util.concurrent.FutureTask.run(FutureTask.java:262)
        javax.swing.SwingWorker.run(SwingWorker.java:335)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
    null
        is.Repository.getValue(Repository.java:355)
        is.NamedInfo.checkout(NamedInfo.java:124)
        Igui.MainPanel$RunParamsUpdater.doInBackground(MainPanel.java:1380)
        Igui.MainPanel$RunParamsUpdater.doInBackground(MainPanel.java:1343)
        javax.swing.SwingWorker$1.call(SwingWorker.java:296)
        java.util.concurrent.FutureTask.run(FutureTask.java:262)
        javax.swing.SwingWorker.run(SwingWorker.java:335)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
    - This was due to an old log file that was being incorrectly parsed for the run settings. Deleting the logs forced IGUI to fill in the default values and allowed them to be set.
2015-03-25 First run of BL4S partition in 2015!
  • ReadoutApplication hung while looking for VME modules - base addresses were mismatched between database and hardware.
  • CenkYildiz will commit current state into SVN then tag as last version of BL4S2014 DAQSoftware. BL4S2015 development can continue from there.
  • Tried debugging ReadoutApplication by setting TraceLevel and TracePackage - MAKE SURE TO RESET THIS
  • Next MajorTask is to run with tdaq-05-05-00. Will need to switch CMTCONFIG to use gcc48.
  • May look at diff of the readout modules between the bl4sdaq repository and ATLAS SVN with a view to pushing changes upstream.
  • MonitoringSoftware expects data layout as in 2014 and so will not run with the lab test setup. Could split this into low- and high-level monitoring, such that low-level monitoring only looks at available modules and performance metrics. High-level monitoring could perform reconstruction and physics analysis under an assumed detector configuration.
    Resources

    • ATLAS
    • BL4S
    • Doxygen: http://test-bl4sdoc.web.cern.ch/test-bl4sdoc/index.html

    Instructions

    A selection of instructions for DAQ set up follows. Much of this is encapsulated in setupBL4SDAQ_bash on the daquser account (See the SysAdmin page).

    Using EOS on SLC6

    The default eos alias seems to depend on old versions of libreadline and libcrypto (normally provided by the readline and openssl packages). Compatibility versions that work as of 2015-05-06 can be installed with:
    yum install compat-readline5.x86_64 openssl098e.x86_64

    Setting up an ATLAS TDAQ release

    The quick version:

    • Let $TDAQ_RELEASE be the release version, e.g. TDAQ_RELEASE=tdaq-05-05-00
    • Let $CMTCONFIG be the system architecture, e.g. CMTCONFIG=x86_64-slc6-gcc48-opt

    Use CMT to set up the release:

    source /afs/cern.ch/atlas/project/tdaq/cmt/bin/cmtsetup.sh $TDAQ_RELEASE $CMTCONFIG
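
    For example, all in one go (a minimal sketch assuming a bash shell and the example values above):

    export TDAQ_RELEASE=tdaq-05-05-00                 # release version
    export CMTCONFIG=x86_64-slc6-gcc48-opt            # system architecture
    source /afs/cern.ch/atlas/project/tdaq/cmt/bin/cmtsetup.sh $TDAQ_RELEASE $CMTCONFIG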

    Use a local IPC server

    TDAQ_IPC_INIT_REF=file:/afs/cern.ch/user/${USER:0:1}/${USER}/public/bl4s-ipc-ref/ipc_root.ref
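
    A minimal sketch for a bash session (the mkdir is only illustrative - the reference directory must exist before the IPC server writes ipc_root.ref into it):

    mkdir -p /afs/cern.ch/user/${USER:0:1}/${USER}/public/bl4s-ipc-ref
    export TDAQ_IPC_INIT_REF=file:/afs/cern.ch/user/${USER:0:1}/${USER}/public/bl4s-ipc-ref/ipc_root.ref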

    Set up database

    Let ${TDAQ_PACKAGE_ROOT} be the location where the DataFlow packages are checked out, e.g. TDAQ_PACKAGE_ROOT=$HOME/public/DAQ/DataFlow

    Note that this directory needs to be world readable since many processes will be reading it from multiple locations. See the ATLAS DAQ training.

    The first location searched should be the installed package; after that come locations with more generic configuration files.

    TDAQ_DB_PATH=${TDAQ_PACKAGE_ROOT}/installed/share/data:${TDAQ_DB_PATH}
    The starting point of the database is usually the partition xml. This pulls other files that are needed into the database.
    TDAQ_DB_DATA=${TDAQ_PACKAGE_ROOT}/installed/share/data/daq/partitions/${TDAQ_DB_NAME}.data.xml
    Tell OKS to use the xml files to build the database.
    TDAQ_DB=oksconfig:${TDAQ_DB_DATA}
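
    Putting these together (a minimal sketch assuming bash; TDAQ_DB_NAME and the paths are example values - adjust them to your checkout and partition file):

    export TDAQ_PACKAGE_ROOT=$HOME/public/DAQ/DataFlow          # where the DataFlow packages are checked out
    export TDAQ_DB_NAME=BL4S                                    # hypothetical partition file name (BL4S.data.xml)
    export TDAQ_DB_PATH=${TDAQ_PACKAGE_ROOT}/installed/share/data:${TDAQ_DB_PATH}
    export TDAQ_DB_DATA=${TDAQ_PACKAGE_ROOT}/installed/share/data/daq/partitions/${TDAQ_DB_NAME}.data.xml
    export TDAQ_DB=oksconfig:${TDAQ_DB_DATA}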

    Checking out a package from ATLAS SVN repository

    cd ~daquser/public/DAQ/DataFlow
    /afs/cern.ch/atlas/project/tdaq/cmt/bin/getpkg DAQ/DataFlow/<package>
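
    For example, to fetch the ROSCore package that the IS schema example further down includes (the package location under DAQ/DataFlow is an assumption - check the repository layout first):

    cd ~daquser/public/DAQ/DataFlow
    /afs/cern.ch/atlas/project/tdaq/cmt/bin/getpkg DAQ/DataFlow/ROSCore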
    

    Per's TDAQ Cheat Sheet

    A set of very useful commands (explanations should be added)

    setup_daq -d $TDAQ_DB_DATA -p $TDAQ_PARTITION 2>&1 | tee $HOME/setup_daq.out      # Starting a partition
    ispygui.py $TDAQ_PARTITION&                                                       # 
    oks_data_editor $TDAQ_DB_DATA &                                                   # Edit a partition file with oks_data_editor
    nedit $TDAQ_DB_DATA &                                                             # Edit a partition with a text editor (discouraged)
    is_ls -p ${TDAQ_PARTITION}  -n "DF" -v -R "L2SV-1.*"                              # List IS in the partition
    ipc_ls -p ${TDAQ_PARTITION}                                                       # List applications for the given partition
    rm_free_all_resources -p ${TDAQ_PARTITION}                                        # 
    pmg_list_partition -p $TDAQ_PARTITION                                             # List all processes within the partition
    pmg_kill_partition -p $TDAQ_PARTITION                                             # Kill the partition manually
    rc_checkapps -d oksconfig:$TDAQ_DB_DATA -p $TDAQ_PARTITION                        #
    ipc_ls -l                                                                         # List applications with registered IPC interfaces 
    ipc_clean -p ${TDAQ_PARTITION}                                                    # 
    ipc_rm -p ${TDAQ_PARTITION} -i ".*" -n ".*"                                       # Shutdown applications for the given partition
    dal_dump_app_config -d $TDAQ_DB -p $TDAQ_PARTITION                                # Show applications and their configuration parameters
    dal_test_timeouts -d oksconfig:$TDAQ_DB_DATA -p $TDAQ_PARTITION -s LVL2Farm       # 
    rdb_admin -d $TDAQ_DB_DATA -p $TDAQ_PARTITION -r all -m                           # Reload modified database files
    rdb_admin -d ISRepository -p $TDAQ_PARTITION -l $DF_WORK/installed/share/data/hltsv/schema/hltsv_is.schema.xml
    
    # Additions from Cenk:
    oks_dump $TDAQ_DB_DATA > /dev/null                                               # Show errors in the current partition database
                                                                                      # Useful for checking problems after modifying the database by hand (e.g. with vim)
    

    Setting CPU affinity

    The default CPU affinity for the SBC may not be ideal. To change it, go to the RCD instance in the database and set its parameters to:

     -a -1
    This way CPU usage will be divided more evenly across all cores. To see the effect (a quick taskset check is sketched after this list):
    • go to the SBC (lnxpool41) and enter the command:
    top -H -p $(pidof ReadoutApplication)
    • Press 1 to see the work done on each core separately
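
    As a quick check from the shell, the allowed CPUs of the readout process can also be listed with taskset (a standard Linux tool, not part of TDAQ):

    taskset -cp $(pidof ReadoutApplication)    # prints the CPU affinity list of the process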

    Readout order for the VME modules

    • To change the readout order of the VME modules, find the RCD instance (for BL4S it is called BL4SApp) and change the order in the "contains" property. For instance, to have the V792 data written before the V1290 data, keep V792Module above V1290Module.
    • Note that changing the order of the Input Channels in the Resources property of the Segment does not change anything.
    • The BL4S analysis is also written in such a way that this order does not matter.

    Debugging

    • In the ReadoutConfiguration instance there are TraceLevel and TracePackage attributes.
    • The ID of each package can be checked with the DFDebug_menu command. For instance, RCDEXAMPLE has number 300.
    • In the code there are debugging output streams like the one below. This particular one prints its output if TraceLevel is set to 15 or higher and TracePackage is set to 300 or 0 (show all packages).
      • DEBUG_TEXT(DFDB_RCDEXAMPLE, 15, "DataChannelV792::constructor: Entered")

    Data Flow (Writing data, and sending fragments for monitoring)

    Several DataOut plugins can be used (FileDataOut, TCPDataOut, DCDataOut and EmonDataOut are discussed below).

    Note that TCPDataOut and DCDataOut are for bigger systems, where one wants to gather fragments from different applications (for instance apps that run on separate VME crates) and build events. We'll use FileDataOut.

    An instance of one of these plugins can be used in three places in the RCD application configuration:

    • Output -> Main Output (set to an instance of FileDataOut)
    • MonitoringOutput -> if left empty, it is set to EmonDataOut with the default configuration, so that one can read events one by one and use the data to publish histograms in OH. In our case, we can build our monitoring program using Emon.
    • DebugDataOut -> a third output, for debugging

    EmonDataOut is configured in the ReadoutConfiguration instance via the MonitoringScalingFactor property. If it is set to 100, 1 in 100 events will be sent to monitoring (not sure if this is correct after talking to Serguei).

    DFCountedPointer -> variables you can publish in the IGUI

    See the use of DFCountedPointer in the DataChannelV792 code

    Changing subscription criteria for errors/warnings in IGUI

    The following expression can be used for the subscription:
    (sev=WARNING or sev=ERROR or sev=FATAL) and (not app=CHIP*)

    Event Format

    Since we write events with FileDataOut, we only have ROD fragments in the data files, not full events or ROB fragments. This means our events cannot be read by the ATLAS eformat library. To be able to use eformat, one has to send events to a server with TCPDataOut and build them with the Event Builder. That would also allow using one or more VME crates and reading the events with ATLAS eformat. On the plus side, since the VME SBC is limited to about 3 MB/s, multiple crates could be used to speed up the DAQ.

    Information Services (IS) and publishing variables. (In progress)

    • A schema file should be created describing the variables to publish. Inside the file one should have something like:

    <include>                 
     <file path="ROSCore/ROSCoreInfo.schema.xml"/>
    </include>                
                              
     <class name="DataChannelMyModuleInfo" description="Statistics from a MyModule DataChannel">
      <superclass name="SingleFragmentDataChannelInfo"/>
      <attribute name="myvar1" description="Min. number of polling cycles before DREADY" type="u32"/>
      <attribute name="myvar2" description="Max. number of polling cycles before DREADY" type="u32"/>                                                                                           
     </class>                 
    

    • This schema file has to be loaded into the rdb_server called ISRepository belonging to the partition you are running. To have this done automatically, add the schema file name to the ISInfoDescriptionFiles attribute of a SW_Repository object that is listed in the Uses attribute of the application.

    • The requirements file of the particular package should have the following lines:

    document is-generation DataChannelMyModuleInfo               -s=../schema namespace="ROS" header_dir="ROSInfo" MyModuleInfo.schema.xml
    
    public
    #==========================================================
    apply_pattern install_libs files=...(Depending on the software)
    apply_pattern install_apps files=...(Depending on the software)
    
    apply_pattern install_data    name=schema  src_dir="../schema" files="*.xml"
    
    apply_pattern install_headers name=is_info src_dir="$(bin)/ROSInfo" files="*.h" target_dir="../ROSInfo"
    
    ## Automated generation of repository db
    macro sw.repository.is-info-file.share/data/rcd_MyModule/MyModuleInfo.schema.xml:name "MyModule DataChannel IS xml description"
    

    For an example, one can see daquser@lxplus:~/public/BL4S_DAQ/DataFlow/rcd_v792/cmt/requirements
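
    Once the application is running and publishing, the new information can be checked with is_ls from the cheat sheet above. A sketch (the IS server name "DF" and the regular expression are assumptions - point them at wherever your application actually publishes):

    is_ls -p ${TDAQ_PARTITION} -n DF -v -R ".*MyModule.*"    # list matching IS entries in the partition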

    Debian packages

    To run ATLAS TDAQ programs under Debian/Ubuntu, the following packages are required:
    • libjpeg62
    • libXm
    • libtk8.5

    Notes

    Event dump explanation (from https://espace.cern.ch/project-blfs/detlog/Lists/Posts/Post.aspx?ID=267).
    Word Explanation
    1234cccc Separator marker
    00000004 Number of words in the separator
    003d01a0 Sequence number of the following block of data within this file
    00000198 Size of the block before the next separator, given in bytes (0x198 = 408 = 102 words x 4 bytes)
       
    ee1234ee Event Start header marker
    00000009 Number of words in header
    03010000 Format version number (same as real ATLAS events)
    00510054 Source identifier (0051: sub-detector, 0054: module ID)
    5418751b Run number (repeated throughout the file)
    003d019f Level 1 ID
    003d019f Bunch Crossing ID
    00000000 Level 1 Trig Type
    00000000 Detector Event Type
       
    00510002 Channel ID (v792)
    00000022 Number of words in next block (34: 32 data + 1 header + 1 EOB)
    fa012000 QDC data words (details in DataChannelV792.cpp)
    f8004036 ...
    f810404f  
    f8014054  
    f811406f  
    f8024056  
    f812405c  
    f803406a  
    f8134075  
    f8044068  
    f814405c  
    f8054052  
    f815405d  
    f80640ab  
    f8164077  
    f8074077  
    f8174068  
    f808405e  
    f818408a  
    f8094072  
    f8194072  
    f80a407f  
    f81a407c  
    f80b406b  
    f81b406f  
    f80c406c  
    f81c406c  
    f80d406e  
    f81d4083  
    f80e4079  
    f81e4070  
    f80f4075  
    f81f4079  
    fc3d05ee QDC Trailer
       
    00510003 Channel ID (1st v1290)
    47a033ff TDC data words (details in DataChannelV1290.cpp)
    0819f119 ...
    00001333  
    00601309  
    0080173d  
    00c011ac  
    00201c12  
    00401758  
    008028e8  
    00e0203b  
    000052af  
    00a023fa  
    1819f00c  
    0919f119  
    010017c9  
    01600c99  
    01201c9e  
    0140207a  
    1919f006  
    8000029f Global TDC trailer
       
    00510006 Channel ID (2nd v1290)
    47a033ff TDC data words (details in DataChannelV1290.cpp)
    0819f215 ...
    0080069f  
    00c00873  
    00a005ed  
    1819f005  
    0919f215  
    01000691  
    1919f003  
    8000015f Global TDC trailer
       
    00510004 Channel ID (v560)
    00000010 Scaler data words (details in DataChannelV560.cpp)
    003d01a0  
    00638db6  
    0221ec1e  
    02178a1a  
    02302813  
    01636ba4  
    060a8b03  
    00638dac  
    00000000  
    00000000  
    00293f71  
    024cd812  
    00000000  
    00000000  
    00000000  
    00000000  
       
    00000000 Status Word 1
    00000000 Status Word 2
    00000000 Status Word 3
    00000000 Status Word 4
    00000004 Number of Status Words
    00000056 Data Words in the event (86 = 36 (QDC) + 21 (TDC1) + 11 (TDC2) + 18 (Scaler))
    00000001 Status Position (0: Status -> Data, 1: Data -> Status)
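
    A one-word-per-line hex dump like the table above can be produced from a raw data file with standard tools, for example (a sketch - the file name is a placeholder and od shows the 32-bit words in host byte order):

    od -An -tx4 -w4 -v my_run.data | head -n 120    # one 32-bit word per line, in hex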

    Contacts

    -- CenkYildiz - 17 Mar 2014
    -- CenkYildiz - 26 Mar 2014
