-- KaVangTsang - 2018-03-05

---+!! RCE Setup
Starting the RCE GUI
From np04-srv-XXX, run:
source /nfs/sw/rce/setup.sh
rce_talk CERN-1 start_gui #where 1 means COB 1
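If you do this often, here is a minimal Python sketch of the same two steps (a hypothetical convenience wrapper, not part of the RCE tools; it assumes rce_talk is on the PATH once setup.sh is sourced):
<verbatim>
#!/usr/bin/env python
# start_rce_gui.py -- hypothetical convenience wrapper, not part of the RCE tools.
# 'source' only works inside a shell, so chain setup.sh and rce_talk in one bash -c call.
import subprocess
import sys

def start_gui(cob="CERN-1"):
    cmd = "source /nfs/sw/rce/setup.sh && rce_talk %s start_gui" % cob
    subprocess.run(["bash", "-c", cmd], check=True)

if __name__ == "__main__":
    start_gui(sys.argv[1] if len(sys.argv) > 1 else "CERN-1")
</verbatim>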
Setting/unsetting FEB emulation mode on the RCEs
Real WIB Data
Assuming the config is called "Coldbox":
edit Coldbox/user_run_options.fcl
and set
daq.fragment_receiver.rce_feb_emulation_mode: false
Emulated Data
Assuming the config is called "Coldbox":
edit Coldbox/user_run_options.fcl
and set
daq.fragment_receiver.rce_feb_emulation_mode: true
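Since both cases are just flipping one boolean in user_run_options.fcl, a small helper can do the edit for you. This is a hypothetical sketch, assuming the Coldbox-style config layout above:
<verbatim>
#!/usr/bin/env python
# set_rce_emulation.py -- hypothetical helper, not part of the standard tools.
# Usage: python set_rce_emulation.py Coldbox true|false
import re
import sys

def set_emulation(config_dir, enabled):
    path = "%s/user_run_options.fcl" % config_dir
    with open(path) as f:
        text = f.read()
    # rewrite the existing rce_feb_emulation_mode line in place
    new = re.sub(r"(daq\.fragment_receiver\.rce_feb_emulation_mode:\s*)(true|false)",
                 r"\g<1>" + ("true" if enabled else "false"), text)
    with open(path, "w") as f:
        f.write(new)

if __name__ == "__main__":
    set_emulation(sys.argv[1], sys.argv[2].lower() == "true")
</verbatim>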
RPT SDK:
https://confluence.slac.stanford.edu/display/RPTUSER/SDK+Download+and+Installation
Login to lxplus.
wget http://www.slac.stanford.edu/projects/CTK/SDK/rce-sdk-latest.tar.gz
Set up the SDK:
Login to pddaq-gen02-ctrl0
[matt@pddaq-gen02 rce]$ cd /daq/rce/
[matt@pddaq-gen02 rce]$ scp magraham@lxplus.cern.ch:~/rce/rce-sdk-latest.tar.gz .
[matt@pddaq-gen02 rce]$ tar -xvf rce-sdk-latest.tar.gz
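If the machine has direct web access (the steps above hop through lxplus in case it does not), the download and untar can be scripted in one go. A sketch, with the /daq/rce destination taken from the steps above:
<verbatim>
#!/usr/bin/env python
# fetch_rce_sdk.py -- hypothetical sketch of the wget + tar steps above in one script.
import tarfile
import urllib.request

SDK_URL = "http://www.slac.stanford.edu/projects/CTK/SDK/rce-sdk-latest.tar.gz"
DEST_DIR = "/daq/rce"   # install area used above

def fetch_and_unpack():
    tarball = DEST_DIR + "/rce-sdk-latest.tar.gz"
    urllib.request.urlretrieve(SDK_URL, tarball)   # same as the wget step
    with tarfile.open(tarball) as tf:              # same as tar -xvf
        tf.extractall(DEST_DIR)

if __name__ == "__main__":
    fetch_and_unpack()
</verbatim>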
For the On-RCE code:
=lxplus> svn checkout svn+ssh://rhel6-64.slac.stanford.edu/afs/slac/g/reseng/svn/repos/DUNE/trunk . =
Then scp it to pddaq-gen02-ctrl0:
[matt@pddaq-gen02 rce]$ scp -r magraham@lxplus:~/rce/firmware .
[matt@pddaq-gen02 rce]$ scp -r magraham@lxplus:~/rce/software .
(NOTE, as of 1/27/17 we are using the 35ton code that lives in .../repos/LBNE/trunk)
=================================
Some notes on getting the shelf set up (this should only be needed when the shelf is moved)
Shelf Manager:
Login to 192.168.1.2 as root
Set the shelf name:
clia shelfaddress
Set the shelf IP param: =clia setlanconfig 1 3 192.168.1.2 =
Get RCE mac addresses, used to set up the reserved dhcp addresses:
cob_dump_bsi --all 192.168.1.2
Add them to the DHCP config:
/etc/dhcp/dhcpd.conf
<verbatim>
option domain-name "pddaq";
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
not authoritative;
log-facility local7;
option ip-forwarding false;
option mask-supplier false;
deny unknown-clients;

#### Our Subnet, IP address Pool and gateway/router
subnet 192.168.10.0 netmask 255.255.255.0 {
  interface enp10s0;
  #range dynamic-bootp 192.168.10.10 192.168.10.200;
  option broadcast-address 192.168.10.255;
  option routers 192.168.10.1;

  host rce1 {
    hardware ethernet 08:00:56:00:43:ca;
    fixed-address 192.168.10.11;
    option host-name "rce1";
  }
  …
}
</verbatim>
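Writing those host stanzas by hand gets tedious with a full shelf of RCEs, so here is a sketch of a generator (a hypothetical helper; fill RCES in from the cob_dump_bsi output and your chosen fixed addresses -- the one entry below is just the rce1 example above):
<verbatim>
#!/usr/bin/env python
# make_dhcp_hosts.py -- hypothetical helper: print dhcpd.conf host {} stanzas
# from a name -> (MAC, IP) map.
RCES = {
    "rce1": ("08:00:56:00:43:ca", "192.168.10.11"),
    # "rce2": ("xx:xx:xx:xx:xx:xx", "192.168.10.12"),
}

def host_stanza(name, mac, ip):
    return ("host %s {\n"
            "  hardware ethernet %s;\n"
            "  fixed-address %s;\n"
            "  option host-name \"%s\";\n"
            "}\n" % (name, mac, ip, name))

if __name__ == "__main__":
    for name, (mac, ip) in sorted(RCES.items()):
        print(host_stanza(name, mac, ip))
</verbatim>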
To start (or restart) the DHCP server:
[matt@pddaq-gen02 ~]$ sudo systemctl restart dhcpd.service
To see if the RCEs got the IPs, look at:
[matt@pddaq-gen02 ~]$ sudo tail /var/log/messages
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPDISCOVER from 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPOFFER on 192.168.10.15 to 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPREQUEST for 192.168.10.15 (192.168.10.1) from 08:00:56:00:43:c3 via enp10s0
Jan 28 09:26:47 pddaq-gen02 dhcpd: DHCPACK on 192.168.10.15 to 08:00:56:00:43:c3 via enp10s0
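Rather than eyeballing the log, you can pull out just the DHCPACK lines to see which MAC ended up on which address. A minimal sketch (run it with sudo, same as the tail above):
<verbatim>
#!/usr/bin/env python
# check_rce_leases.py -- hypothetical sketch: scan /var/log/messages for DHCPACK
# lines and report the most recent lease per MAC address.
import re

ACK_RE = re.compile(r"DHCPACK on (\S+) to (\S+) via")

def leases(logfile="/var/log/messages"):
    seen = {}
    with open(logfile) as f:
        for line in f:
            m = ACK_RE.search(line)
            if m:
                ip, mac = m.groups()
                seen[mac] = ip   # later lines overwrite earlier ones
    return seen

if __name__ == "__main__":
    for mac, ip in sorted(leases().items()):
        print("%s -> %s" % (mac, ip))
</verbatim>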
=====================================
Set up an NFS share
https://www.howtoforge.com/nfs-server-and-client-on-centos-7
Add to /etc/exports:
/var/nfsshare 10.193.160.0/23(rw,sync,no_root_squash,no_all_squash)
/daq/rce/software.pd 10.193.160.0/23(rw,sync,no_root_squash,no_all_squash)
/daq/rce/proto-dune/software 10.193.160.0/23(rw,sync,no_root_squash,no_all_squash)
restart nfs as follows:
sudo exportfs -a ; sudo systemctl restart nfs
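To check that the exports actually came up, =showmount -e= against the server should list all three paths. A small sketch that automates the comparison (hypothetical helper; the expected paths are the ones from /etc/exports above):
<verbatim>
#!/usr/bin/env python
# check_exports.py -- hypothetical sanity check: compare 'showmount -e' output
# against the paths we expect to be exported.
import subprocess

EXPECTED = [
    "/var/nfsshare",
    "/daq/rce/software.pd",
    "/daq/rce/proto-dune/software",
]

def check(server="localhost"):
    out = subprocess.run(["showmount", "-e", server],
                         capture_output=True, text=True, check=True).stdout
    for path in EXPECTED:
        print("%-8s %s" % ("OK" if path in out else "MISSING", path))

if __name__ == "__main__":
    check()
</verbatim>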
Don't forget to make sure the firewall is stopped and stays disabled across reboots:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
======================================================
Setting up the /daq/rce/software/35ton/rceScripts/rce_talk script:
We need to tell rce_talk where the RCEs are. Add the IP addresses to the code like this:
self.slot_map= {
###################
# CERN Defined RCEs
###################
( '192.168.1.2' ):'CERN-SM',
( '192.168.10.11' ):'CERN-100',
( '192.168.10.12' ):'CERN-102',
( '192.168.10.13' ):'CERN-110',
( '192.168.10.14' ):'CERN-112',
( '192.168.10.15' ):'CERN-120',
( '192.168.10.16' ):'CERN-122',
( '192.168.10.17' ):'CERN-130',
( '192.168.10.18' ):'CERN-132',
( '192.168.10.19' ):'CERN-DTM1'
}
...then you can do things like this:
[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-102 check_host
.. done
[CERN-102]::stdout:: 4.0.0-xilinx-11503-gca893ab
[CERN-102]::stdout:: 23:38:41 up 6:38, 0 users, load average: 0.07, 0.03, 0.05
[CERN-102]::stdout:: root 2365 17.0 0.9 367232 9628 ? Sl 23:38 0:03 bin/rceServer
[CERN-102]::stdout:: Filesystem Size Used Avail Use% Mounted on
[CERN-102]::stdout:: /dev/root 7.8G 2.2G 5.3G 30% /
...
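Before digging into a failed check_host, it can be handy to ping everything in the slot map in one go. A sketch of that loop (a hypothetical helper; the map just mirrors the entries added to rce_talk above):
<verbatim>
#!/usr/bin/env python
# check_all_rces.py -- hypothetical helper: ping every address in the slot map
# so dead RCEs stand out before running check_host on each one.
import subprocess

SLOT_MAP = {
    "192.168.1.2":   "CERN-SM",
    "192.168.10.11": "CERN-100",
    "192.168.10.12": "CERN-102",
    # ... remaining entries as in rce_talk ...
}

def check_all():
    for ip, name in sorted(SLOT_MAP.items()):
        up = subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                             stdout=subprocess.DEVNULL) == 0
        print("%-10s %-16s %s" % (name, ip, "up" if up else "DOWN"))

if __name__ == "__main__":
    check_all()
</verbatim>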
=========================================================
Once the RCE IPs and NFS server are set up, we can change the "axistreamdma.sh" script (which is run at RCE bootup) to mount the NFS directory, install the
AxiStreamDma driver on the RCE, and start the rceServer. The script is in the "software/35ton/rceData" directory and, for the CERN setup in building 4, should look like this:
=[matt@pddaq-gen02 35ton]$ cat rceData/axistreamdma.sh=
#!/bin/sh
mkdir -p /mnt/host
mount -t nfs 192.168.10.1:/daq/rce/software /mnt/host
insmod /mnt/host/35ton/AxiStreamDma/driverV3_4.00/AxiStreamDmaModule.ko cfgRxSize=2048,2048,327680,0 cfgRxCount=8,8,800,0 cfgTxCount=8,8,0,0 cfgRxAcp=0,0,1,0
chmod a+rw /dev/axi*
/mnt/host/35ton/rceScripts/start_server.csh
… this file needs to be put on each RCE at /bin/axistreamdma.sh, which you can do using rce_talk:
[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-1 scp_put ../rceData/axistreamdma.sh /bin/axistreamdma.sh
Then, you can either reboot the COB (=rce_talk CERN-SM powercycle_cob=) or just run the command using rce_talk:
[matt@pddaq-gen02 rceScripts]$ python rce_talk CERN-1 ssh_cmd /bin/axistreamdma.sh
Check to see if rceServer is running on the RCEs using the check_host command with rce_talk.
NOTE!!! You have to do the same thing for the DTM; replace "CERN-1" with "CERN-DTM".
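The scp_put / ssh_cmd pair has to be repeated per RCE (and for the DTM), so a loop is less error-prone. A sketch, run from the rceScripts directory (a hypothetical wrapper around the rce_talk calls above; adjust TARGETS to match your slot map):
<verbatim>
#!/usr/bin/env python
# deploy_axistreamdma.py -- hypothetical wrapper around the rce_talk calls above:
# copy axistreamdma.sh to each target and run it.  Run from rceScripts/.
import subprocess

TARGETS = ["CERN-100", "CERN-102", "CERN-DTM1"]   # extend to all RCEs in your slot map
SCRIPT = "../rceData/axistreamdma.sh"

def deploy(target):
    subprocess.check_call(["python", "rce_talk", target, "scp_put",
                           SCRIPT, "/bin/axistreamdma.sh"])
    subprocess.check_call(["python", "rce_talk", target, "ssh_cmd",
                           "/bin/axistreamdma.sh"])

if __name__ == "__main__":
    for t in TARGETS:
        deploy(t)
</verbatim>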
=====================================
To run the RCE GUI, run rce_talk CERN-1XX start_gui
To put the RCE in emulation mode, go to the Commands tab and hit the =StartDebugFebEmu= button.
Change the trigger rate from the DTM: edit defaults.xml -> =0x61A800=
-- this means 10 Hz; the value is the wait time in clock cycles with a 64 MHz clock (see the sketch below).
Change the DAQ destination: edit defaults.xml -> =10.193.160.28=
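The trigger-rate number is just the wait time in 64 MHz clock ticks, i.e. 64e6 divided by the desired rate. A quick check of that arithmetic:
<verbatim>
#!/usr/bin/env python
# trigger_period.py -- the arithmetic behind the 0x61A800 value above:
# wait time (in 64 MHz clock cycles) = clock frequency / trigger rate.
CLOCK_HZ = 64000000

def wait_cycles(rate_hz):
    return int(CLOCK_HZ / rate_hz)

if __name__ == "__main__":
    for rate in (1, 10, 100):
        print("%4d Hz -> 0x%X" % (rate, wait_cycles(rate)))   # 10 Hz -> 0x61A800
</verbatim>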
-- KarolH - 2017-02-14