RCE Setup

-- KaVangTsang - 2018-03-05

Starting the RCE GUI

from np04-srv-XXX
source /nfs/sw/rce/setup.sh 
rce_talk CERN-1 start_gui  #where 1 means COB 1

Setting/unsetting FEB emulation mode on the RCEs

Real WIB Data

Assuming the config is called "Coldbox":

edit Coldbox/user_run_options.fcl and set

daq.fragment_receiver.rce_feb_emulation_mode: false

Emulated Data

Assuming the config is called "Coldbox":

edit Coldbox/user_run_options.fcl and set

daq.fragment_receiver.rce_feb_emulation_mode: true
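A quick way to flip the flag without opening an editor is sed. The sketch below works on a scratch file so it can be run anywhere; for the real thing, point sed at Coldbox/user_run_options.fcl (the path is whatever your config directory is, as above):

```shell
# Sketch: flip the emulation flag with sed on a scratch copy of the line.
# For the real config, point sed at Coldbox/user_run_options.fcl instead.
cfg=$(mktemp)
echo 'daq.fragment_receiver.rce_feb_emulation_mode: false' > "$cfg"
# false -> true selects emulated data; swap the two words to go back to real WIB data.
sed -i 's/rce_feb_emulation_mode: false/rce_feb_emulation_mode: true/' "$cfg"
result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```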


Downloading the SDK

https://confluence.slac.stanford.edu/display/RPTUSER/SDK+Download+and+Installation

Login to lxplus:

  wget http://www.slac.stanford.edu/projects/CTK/SDK/rce-sdk-latest.tar.gz

Set up the SDK:

Login to pddaq-gen02-ctrl0
[matt@pddaq-gen02 rce]$ cd /daq/rce/
[matt@pddaq-gen02 rce]$ scp  magraham@lxplus.cern.ch:~/rce/rce-sdk-latest.tar.gz .
[matt@pddaq-gen02 rce]$ tar -xvf rce-sdk-latest.tar.gz

For the on-RCE code, on lxplus:

lxplus> svn checkout svn+ssh://rhel6-64.slac.stanford.edu/afs/slac/g/reseng/svn/repos/DUNE/trunk .

Then scp to pddaq-gen02-ctrl0:

[matt@pddaq-gen02 rce]$ scp -r magraham@lxplus:~/rce/firmware .
[matt@pddaq-gen02 rce]$ scp -r magraham@lxplus:~/rce/software .
(NOTE, as of 1/27/17 we are using the 35ton code that lives in .../repos/LBNE/trunk)

=================================

Some notes on getting the shelf set up (should only have to do this when the shelf is moved)

Shelf Manager: log in as root.

Set the shelf name:

clia shelfaddress

Set the shelf IP param:

clia setlanconfig 1 3

Get RCE mac addresses, used to set up the reserved dhcp addresses: cob_dump_bsi --all
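Writing out one host stanza per RCE by hand gets tedious; a hypothetical helper like the one below expands a list of MACs into dhcpd.conf stanzas. The two MACs are the ones that appear elsewhere on this page; the rceN host names are examples, not a fixed convention:

```shell
# Hypothetical helper: expand a list of RCE MAC addresses (as reported by
# cob_dump_bsi --all) into dhcpd.conf host stanzas. MACs are the two seen
# elsewhere on this page; host names are examples only.
out=$(
  i=1
  for mac in 08:00:56:00:43:ca 08:00:56:00:43:c3; do
    printf 'host rce%d {\n  hardware ethernet %s;\n  option host-name "rce%d";\n}\n' "$i" "$mac" "$i"
    i=$((i + 1))
  done
)
printf '%s\n' "$out"
```

Paste the resulting stanzas inside the subnet block of /etc/dhcp/dhcpd.conf.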

Add them to the DHCP config: /etc/dhcp/dhcpd.conf

option domain-name "pddaq";
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
not authoritative;
log-facility local7;
option ip-forwarding false;
option mask-supplier false;

deny unknown-clients;

#### Our Subnet, IP address Pool and gateway/router
subnet netmask {
  interface enp10s0;
  #range dynamic-bootp;
  option broadcast-address;
  option routers;

        host rce1 {
         hardware ethernet 08:00:56:00:43:ca;
         option host-name "rce1";
        }
}
To start (or restart) DHCP server:

[matt@pddaq-gen02 ~]$  sudo systemctl restart dhcpd.service

To see if the RCEs got the IPs, look at:

[matt@pddaq-gen02 ~]$ sudo tail /var/log/messages

Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPDISCOVER from 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPOFFER on to 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPREQUEST for ( from 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPACK on to 08:00:56:00:43:c3 via enp10s0
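To pull out which IP went to which RCE MAC, grep the DHCPACK lines from the log. A sketch on a sample log line (the 10.0.1.11 address is made up for the example; real entries come from /var/log/messages as above):

```shell
# Hypothetical sample line; real entries come from /var/log/messages.
line='Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPACK on 10.0.1.11 to 08:00:56:00:43:c3 via enp10s0'
# On DHCPACK lines, field 8 is the leased IP and field 10 the client MAC.
lease=$(printf '%s\n' "$line" | awk '/DHCPACK/ {print $8, $10}')
echo "$lease"
```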


Setup an NFS share (see https://www.howtoforge.com/nfs-server-and-client-on-centos-7). Add to /etc/exports:


Restart NFS as follows:

    sudo exportfs -a
    sudo systemctl restart nfs

Don't forget to make sure the firewall is off permanently:

    sudo systemctl stop firewalld
    sudo systemctl disable firewalld

Setting up the /daq/rce/software/35ton/rceScripts/rce_talk script:

We need to tell rce_talk where the RCEs are… add the IP addresses to the code like this:

   self.slot_map= {
            # CERN Defined RCEs
            ( '' ):'CERN-SM',
            ( '' ):'CERN-100',
            ( '' ):'CERN-102',
            ( '' ):'CERN-110',
            ( '' ):'CERN-112',
            ( '' ):'CERN-120',
            ( '' ):'CERN-122',
            ( '' ):'CERN-130',
            ( '' ):'CERN-132',
            ( '' ):'CERN-DTM1'
            }

... then you can do things like this:

[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-102 check_host
.. done

[CERN-102]::stdout:: 4.0.0-xilinx-11503-gca893ab
[CERN-102]::stdout:: 23:38:41 up 6:38, 0 users, load average: 0.07, 0.03, 0.05
[CERN-102]::stdout:: root 2365 17.0 0.9 367232 9628 ? Sl 23:38 0:03 bin/rceServer
[CERN-102]::stdout:: Filesystem Size Used Avail Use% Mounted on
[CERN-102]::stdout:: /dev/root 7.8G 2.2G 5.3G 30% /



Once the RCE IPs and nfs server are set up, we can change the “axistreamdma.sh” script (which is run at RCE bootup) to mount the nfs directory, install the AxiStream driver on the RCE, and start the rceServer. The script is in the “software/35ton/rceData” directory and, for the CERN setup in building 4, should look like this:

[matt@pddaq-gen02 35ton]$ cat rceData/axistreamdma.sh


mkdir -p /mnt/host

mount -t nfs /mnt/host

insmod /mnt/host/35ton/AxiStreamDma/driverV3_4.00/AxiStreamDmaModule.ko cfgRxSize=2048,2048,327680,0 cfgRxCount=8,8,800,0 cfgTxCount=8,8,0,0 cfgRxAcp=0,0,1,0

chmod a+rw /dev/axi*


… this file needs to be put on each RCE at /bin/axistreamdma.sh, which you can do using rce_talk:

[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-1 scp_put ../rceData/axistreamdma.sh /bin/axistreamdma.sh

Then, you can either reboot the COB (rce_talk CERN-SM powercycle_cob) or just run the command using rce_talk:

[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-1 ssh_cmd /bin/axistreamdma.sh

Check to see if rceServer is running on the RCEs using the check_host command with rce_talk.

NOTE!!! You have to do the same thing for the DTM...replace “CERN-1” with “CERN-DTM”
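The install-and-run steps above are the same for every node, so a loop over the node names saves typing. The sketch below is a dry run that only prints the commands (drop the echo to actually execute them from the rceScripts directory):

```shell
# Dry run: print the rce_talk invocations that push and run axistreamdma.sh
# on an RCE and on the DTM (node names as used above). Remove the echo to
# actually execute them.
cmds=$(
  for node in CERN-1 CERN-DTM; do
    echo "python rce_talk $node scp_put ../rceData/axistreamdma.sh /bin/axistreamdma.sh"
    echo "python rce_talk $node ssh_cmd /bin/axistreamdma.sh"
  done
)
printf '%s\n' "$cmds"
```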


To run the RCE GUI, run:

rce_talk CERN-1XX start_gui

To put the RCE in emulation mode: in the Commands tab, hit the “StartDebugFebEmu” button.


Change trigger rate from DTM: edit defaults.xml -> 0x61A800 -- this means 10 Hz (wait time in clock cycles with a 64 MHz clock).
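The arithmetic behind that value can be checked directly: the wait time is the clock frequency divided by the desired trigger rate.

```shell
# Sanity-check the 10 Hz value: with a 64 MHz clock, the wait time is
# 64e6 / 10 = 6,400,000 cycles, which is 0x61A800.
clock_hz=64000000
rate_hz=10
cycles=$((clock_hz / rate_hz))
hex=$(printf '0x%X' "$cycles")
echo "$cycles cycles = $hex"
```

For another rate, change rate_hz and put the resulting hex value into defaults.xml.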

Change DAQ destination: edit defaults.xml ->

-- KarolH - 2017-02-14

Topic revision: r1 - 2018-03-05 - PatrickTsang1