Using Linux at CERN

Network (landb) - https://network.cern.ch/

cvmfs - https://cernvm.cern.ch/portal/filesystem/quickstart

Web Proxy

Python Virtual Environment

https://virtualenv.pypa.io/en/stable/

[np04daq@np04-srv-009 ~]$ source /nfs/sw/py/bin/activate
(py)[np04daq@np04-srv-009 ~]$ which python
/nfs/sw/py/bin/python
(py)[np04daq@np04-srv-009 ~]$ deactivate

[root@np04-srv-009 dsavage]# yum install python-virtualenv
Loaded plugins: changelog, fastestmirror, kernel-module, protectbase, tsflags,
              : versionlock
base                                                     | 3.6 kB     00:00     
cern                                                     | 4.1 kB     00:00     
elrepo                                                   | 2.9 kB     00:00     
epel                                                     | 4.7 kB     00:00     
extras                                                   | 3.4 kB     00:00     
updates                                                  | 3.8 kB     00:00     
Loading mirror speeds from cached hostfile
160 packages excluded due to repository protections
Package python-virtualenv-1.10.1-4.el7.noarch already installed and latest version
Nothing to do
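
For reference, a minimal sketch of how an environment like /nfs/sw/py could be created with the package above (the target path is taken from the activate example; the exact options originally used are an assumption):

[np04daq@np04-srv-009 ~]$ virtualenv /nfs/sw/py           # create the isolated environment
[np04daq@np04-srv-009 ~]$ source /nfs/sw/py/bin/activate
(py)[np04daq@np04-srv-009 ~]$ pip install requests        # packages now install into /nfs/sw/py, not the system python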

Remove Graphical Services

This has caused boot problems. I think this state is reached by installing the servers with graphics disabled and then manually installing a graphical interface on one of them afterwards.

[root@np04-srv-009 log]# systemctl disable initial-setup-graphical
Removed symlink /etc/systemd/system/multi-user.target.wants/initial-setup-graphical.service.
Removed symlink /etc/systemd/system/graphical.target.wants/initial-setup-graphical.service.
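
If the graphical target should never be used on these servers, the default boot target can also be switched; this is a standard systemd step, not something recorded in the original notes:

[root@np04-srv-009 log]# systemctl set-default multi-user.target
[root@np04-srv-009 log]# systemctl get-default
multi-user.target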

Data Disk Driver Configurations

Dear Geoff,
Thanks to Wainer and the testing session we just had, I think that we can now prepare the "final" configuration of the storage servers.
Please find enclosed a summary of our tests.

1) The oflag=dsync option of dd artificially lowers the performance, forcing a synchronisation that is not really needed. Just changing the way we did the measurements (conv=fsync) already brought the performance up by 25% (400 MB/s -> 500 MB/s)

2) We progressively increased the number of threads in /sys/block/md*/md/group_thread_cnt and found a reasonable plateau at 4.
~> for i in /sys/block/md*/md/group_thread_cnt; do echo 4 > $i; done

3) We reduced the dirty_background_ratio and dirty_ratio, in order to reduce RAM utilisation (this may be tuned once we know better what will run on those nodes)
~> echo 1 > /proc/sys/vm/dirty_background_ratio
~> echo 2 > /proc/sys/vm/dirty_ratio
 
4) We set the read_ahead to 65536
~> for i in `seq 0 3`; do blockdev --setra 65536 /dev/md$i ; done

5) We increased the min/max sync speeds:
~> echo 50000 > /proc/sys/dev/raid/speed_limit_min
~> echo 5000000 > /proc/sys/dev/raid/speed_limit_max

As a result we get ~2.5 GB/s sequential write performance, which is only partially affected by concurrent reading (we get ~4 GB/s sequential read performance).
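
Note that the echo settings above do not survive a reboot. A minimal sketch of persisting the sysctl-visible ones (file name assumed; the group_thread_cnt and read-ahead settings would still need a udev rule or rc.local entry):

~> cat /etc/sysctl.d/90-raid-tuning.conf
vm.dirty_background_ratio = 1
vm.dirty_ratio = 2
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 5000000
~> sysctl -p /etc/sysctl.d/90-raid-tuning.conf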

The last thing to decide is whether we bother with RAID6 or are happy with RAID5.
RAID5 will give us a small performance increase (5-10%) as well as a disk space increase (8%), while RAID6 allows us not to lose data even if a disk breaks while we are recovering from another disk failure. Since disks don't die like flies and our data are meant to be short-lived anyway, I would tend to go for RAID5.

The configuration has 4 independent devices with 12 disks each (one of which is declared as a spare, so that rebuilding starts without human intervention).

Just for reference, the creation command can be (for 1 device):
~> mdadm --create --verbose /dev/md0 --level=5 --raid-devices=11 /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj /dev/sdak --spare-devices=1 /dev/sdal

~> mkfs.xfs /dev/md0

~> mount /dev/md0 /data0

In /etc/mdadm.conf we should specify an email address to be notified of any failures, and make sure that mdmonitor.service is running correctly.
Example:
[root@np04-srv-002 ~]# cat /etc/mdadm.conf
MAILADDR giovanna.lehmann@cern.ch
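
Besides MAILADDR, the array definitions can be recorded and the monitor service enabled; a sketch, assuming the arrays were created as shown above:

[root@np04-srv-002 ~]# mdadm --detail --scan >> /etc/mdadm.conf    # append ARRAY lines for the assembled devices
[root@np04-srv-002 ~]# systemctl enable mdmonitor                  # mdmonitor mails MAILADDR on failure events
[root@np04-srv-002 ~]# systemctl start mdmonitor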


Ciao
Giovanna

Update Kernel on np04-srv-002-ctrl

http://elrepo.org/tiki/kernel-ml

[root@np04-srv-002 ~]# yum list --enablerepo=elrepo-kernel | grep kernel
Loaded plugins: changelog, fastestmirror, kernel-module, protectbase, tsflags,
----- snip -----
kernel-ml.x86_64                        4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-devel.x86_64                  4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-doc.noarch                    4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-headers.x86_64                4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-tools.x86_64                  4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-tools-libs.x86_64             4.14.4-1.el7.elrepo            elrepo-kernel
kernel-ml-tools-libs-devel.x86_64       4.14.4-1.el7.elrepo            elrepo-kernel
----- snip -----
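
To actually install and boot the mainline kernel, something like the following should work (standard ELRepo procedure; the grub steps assume a BIOS-booted machine with grub2):

[root@np04-srv-002 ~]# yum --enablerepo=elrepo-kernel install kernel-ml
[root@np04-srv-002 ~]# grub2-set-default 0                         # newest kernel is usually menu entry 0
[root@np04-srv-002 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@np04-srv-002 ~]# reboot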

Investigate np04-srv-009

This machine had a corrupted disk, which required repeated fscks.

  • Wed 29-Nov-2017 - Ran the consistency check in the BIOS. The disks are RAIDed in the BIOS.

Hints from Pat Riehky

System Disks

np04-srv-007 and 008

[dsavage@np04-srv-009 ~]$ lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   931G  0 disk 
├─sda1                       8:1    0     1G  0 part /boot
└─sda2                       8:2    0   930G  0 part 
  ├─cc_np04--srv--009-root 253:0    0    50G  0 lvm  /
  ├─cc_np04--srv--009-swap 253:1    0   7.8G  0 lvm  [SWAP]
  └─cc_np04--srv--009-home 253:2    0 872.2G  0 lvm  
sr0                         11:0    1  1024M  0 rom  

np04-srv-011 to 016

[dsavage@np04-srv-011 ~]$ lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 279.4G  0 disk 
├─sda1                       8:1    0     1G  0 part /boot
└─sda2                       8:2    0 278.4G  0 part 
  ├─cc_np04--srv--011-root 253:0    0    50G  0 lvm  /
  ├─cc_np04--srv--011-swap 253:1    0    28G  0 lvm  [SWAP]
  └─cc_np04--srv--011-home 253:2    0 200.5G  0 lvm  /home
sr0                         11:0    1  1024M  0 rom  

[dsavage@np04-srv-018 ~]$ lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 279.4G  0 disk 
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0 278.4G  0 part 
  ├─cc_np04--srv--018-root    253:0    0    50G  0 lvm  /
  ├─cc_np04--srv--018-swap    253:1    0    28G  0 lvm  [SWAP]
  ├─cc_np04--srv--018-home    253:2    0    10G  0 lvm  /home
  ├─cc_np04--srv--018-scratch 253:3    0    25G  0 lvm  /scratch
  └─cc_np04--srv--018-log     253:4    0 165.5G  0 lvm  /log
sr0                            11:0    1  1024M  0 rom  

np04-srv-001 to 004

[root@np04-srv-001 ~]# lsblk
NAME                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                   8:0    0 894.3G  0 disk  
├─sda1                8:1    0  1000M  0 part  
│ └─md126             9:126  0  1000M  0 raid1 /boot
└─sda2                8:2    0 893.3G  0 part  
  └─md127             9:127  0 893.2G  0 raid1 
    ├─sysvg-root    253:0    0  48.8G  0 lvm   /
    ├─sysvg-swap    253:1    0     4G  0 lvm   [SWAP]
    ├─sysvg-scratch 253:2    0 274.1G  0 lvm   /scratch
    ├─sysvg-log     253:3    0 274.1G  0 lvm   /log
    └─sysvg-home    253:4    0   9.8G  0 lvm   /home
sdb                   8:16   0 894.3G  0 disk  
├─sdb1                8:17   0  1000M  0 part  
│ └─md126             9:126  0  1000M  0 raid1 /boot
└─sdb2                8:18   0 893.3G  0 part  
  └─md127             9:127  0 893.2G  0 raid1 
    ├─sysvg-root    253:0    0  48.8G  0 lvm   /
    ├─sysvg-swap    253:1    0     4G  0 lvm   [SWAP]
    ├─sysvg-scratch 253:2    0 274.1G  0 lvm   /scratch
    ├─sysvg-log     253:3    0 274.1G  0 lvm   /log
    └─sysvg-home    253:4    0   9.8G  0 lvm   /home
sdc                   8:32   0   5.5T  0 disk  
sdd                   8:48   0   5.5T  0 disk  
sde                   8:64   0   5.5T  0 disk  
sdf                   8:80   0   5.5T  0 disk  
sdg                   8:96   0   5.5T  0 disk  
sdh                   8:112  0   5.5T  0 disk  
sdi                   8:128  0   5.5T  0 disk  
sdj                   8:144  0   5.5T  0 disk  
sdk                   8:160  0   5.5T  0 disk  
sdl                   8:176  0   5.5T  0 disk  
sdm                   8:192  0   5.5T  0 disk  
sdn                   8:208  0   5.5T  0 disk  
sdo                   8:224  0   5.5T  0 disk  
sdp                   8:240  0   5.5T  0 disk  
sdq                  65:0    0   5.5T  0 disk  
sdr                  65:16   0   5.5T  0 disk  
sds                  65:32   0   5.5T  0 disk  
sdt                  65:48   0   5.5T  0 disk  
sdu                  65:64   0   5.5T  0 disk  
sdv                  65:80   0   5.5T  0 disk  
sdw                  65:96   0   5.5T  0 disk  
sdx                  65:112  0   5.5T  0 disk  
sdy                  65:128  0   5.5T  0 disk  
sdz                  65:144  0   5.5T  0 disk  
sdaa                 65:160  0   5.5T  0 disk  
sdab                 65:176  0   5.5T  0 disk  
sdac                 65:192  0   5.5T  0 disk  
sdad                 65:208  0   5.5T  0 disk  
sdae                 65:224  0   5.5T  0 disk  
sdaf                 65:240  0   5.5T  0 disk  
sdag                 66:0    0   5.5T  0 disk  
sdah                 66:16   0   5.5T  0 disk  
sdai                 66:32   0   5.5T  0 disk  
sdaj                 66:48   0   5.5T  0 disk  
sdak                 66:64   0   5.5T  0 disk  
sdal                 66:80   0   5.5T  0 disk  
sdam                 66:96   0   5.5T  0 disk  
sdan                 66:112  0   5.5T  0 disk  
sdao                 66:128  0   5.5T  0 disk  
sdap                 66:144  0   5.5T  0 disk  
sdaq                 66:160  0   5.5T  0 disk  
sdar                 66:176  0   5.5T  0 disk  
sdas                 66:192  0   5.5T  0 disk  
sdat                 66:208  0   5.5T  0 disk  
sdau                 66:224  0   5.5T  0 disk  
sdav                 66:240  0   5.5T  0 disk  
sdaw                 67:0    0   5.5T  0 disk  
sdax                 67:16   0   5.5T  0 disk  

Wipe Disks

Boot into rescue mode. In landb, remove the "assign IP address at boot" option. A reboot then uses DHCP and offers a list of installation instances, one of which is rescue.

Hit ctrl-D to issue commands

dd if=/dev/zero of=/dev/sda bs=1M status=progress
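
If only the partition table and filesystem signatures need to go, a quicker alternative (assuming wipefs is available in the rescue image):

wipefs -a /dev/sda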

Logical Volume Management

  • lsblk
  • pvs
  • lvs
  • vgs
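
For reference, a minimal sketch of the create workflow behind those commands (device and volume names here are placeholders, not from these machines):

pvcreate /dev/sdb                        # initialise a physical volume
vgcreate datavg /dev/sdb                 # build a volume group on it
lvcreate -n data -l 100%FREE datavg      # carve out a logical volume
mkfs.xfs /dev/datavg/data                # put a filesystem on it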

disk partitions for quad servers

The following example uses two serial ATA disks (/dev/sda and /dev/sdb) with four partitions (/boot, /, swap and /tmp), each in software RAID-1 configuration:
part raid.01 --size=128  --ondisk=sda
part raid.02 --size=8192 --ondisk=sda
part raid.03 --size=3072 --ondisk=sda
part raid.04 --size=512  --ondisk=sda

part raid.05 --size=128  --ondisk=sdb
part raid.06 --size=8192 --ondisk=sdb
part raid.07 --size=3072 --ondisk=sdb
part raid.08 --size=512  --ondisk=sdb

raid /boot --level=RAID1 --device=md0 --fstype=ext2 raid.01 raid.05
raid /     --level=RAID1 --device=md1 --fstype=ext3 raid.02 raid.06
raid swap  --level=RAID1 --device=md2 --fstype=swap raid.03 raid.07
raid /tmp  --level=RAID1 --device=md3 --fstype=ext3 raid.04 raid.08
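
These part/raid lines usually sit alongside the standard disk-preparation directives in the same kickstart; a sketch with assumed values, not taken from the actual kickstart file:

zerombr
clearpart --all --initlabel --drives=sda,sdb
bootloader --location=mbr --boot-drive=sda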


np04-srv-003

[root@np04-srv-003 by-path]# ls
pci-0000:00:1f.2-ata-1.0        pci-0000:00:1f.2-ata-1.0-part2
pci-0000:00:1f.2-ata-1.0-part1  pci-0000:00:1f.2-ata-2.0

[root@np04-srv-003 by-path]# df
Filesystem                         1K-blocks    Used Available Use% Mounted on
/dev/mapper/cc_np04--srv--003-root  51474912 1470820  47366268   4% /
devtmpfs                            65861144       0  65861144   0% /dev
tmpfs                               65872148       0  65872148   0% /dev/shm
tmpfs                               65872148   17440  65854708   1% /run
tmpfs                               65872148       0  65872148   0% /sys/fs/cgroup
/dev/sda1                             999320  144304    786204  16% /boot
/dev/mapper/cc_np04--srv--003-home 866074296   77872 821979172   1% /home
tmpfs                               13174432       0  13174432   0% /run/user/0
[root@np04-srv-003 by-path]# pvs
  PV         VG              Fmt  Attr PSize   PFree
  /dev/sda2  cc_np04-srv-003 lvm2 a--  893.25g    0 
[root@np04-srv-003 by-path]# lvs
  LV   VG              Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home cc_np04-srv-003 -wi-ao---- 839.25g                                                    
  root cc_np04-srv-003 -wi-ao----  50.00g                                                    
  swap cc_np04-srv-003 -wi-ao----   4.00g                                                    
[root@np04-srv-003 by-path]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree
  cc_np04-srv-003   1   3   0 wz--n- 893.25g    0 

np04-srv-008

[root@np04-srv-008 ~]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/cc_np04--srv--008-root   50G  7.0G   44G  14% /
devtmpfs                            7.8G     0  7.8G   0% /dev
tmpfs                               7.8G     0  7.8G   0% /dev/shm
tmpfs                               7.8G   17M  7.8G   1% /run
tmpfs                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2                           976M  184M  726M  21% /boot
/dev/mapper/cc_np04--srv--008-home   11T   33M   11T   1% /home
AFS                                 8.6G     0  8.6G   0% /afs
tmpfs                               1.6G     0  1.6G   0% /run/user/0

[root@np04-srv-008 ~]# umount -v /home
umount: /home (/dev/mapper/cc_np04--srv--008-home) unmounted

[root@np04-srv-008 ~]# fsck /home
fsck from util-linux 2.23.2
If you wish to check the consistency of an XFS filesystem or
repair a damaged filesystem, see xfs_repair(8).

[root@np04-srv-008 ~]# lvs
  LV   VG              Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home cc_np04-srv-008 -wi-a----- 10.86t                                                    
  root cc_np04-srv-008 -wi-ao---- 50.00g                                                    
  swap cc_np04-srv-008 -wi-ao----  7.81g 

[root@np04-srv-008 dev]# pvs
  PV         VG              Fmt  Attr PSize  PFree 
  /dev/sda3  cc_np04-srv-008 lvm2 a--  10.91t 10.86t


nfs

  • client command - sudo mount pddaq-gen05-daq0:/daq/artdaq /daq/artdaq
  • server commands
[dsavage@pddaq-gen05 ~]$ cat /etc/exports
/daq/artdaq    10.73.136.0/16(rw,sync,no_root_squash,no_all_squash)
#/daq/artdaq    10.193.0.0/16(rw,sync,no_root_squash,no_all_squash)

# restart nfs as follows:
#    sudo exportfs -a ; sudo systemctl restart nfs
# don't forget to make sure the stupid firewall is off forever
#    sudo systemctl stop firewalld
#    sudo systemctl disable firewalld
  • 19/07/2017 nfs server on pddaq-gen05-daq0 came up with no issues. Just had to mount on the clients.
  • sudo mount pddaq-gen05-daq0:/daq/artdaq /daq/artdaq
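
A sketch of the server-side restart and verification steps implied above (standard nfs-utils commands on CentOS 7):

sudo exportfs -ra                         # re-export everything listed in /etc/exports
sudo systemctl enable nfs-server          # make the export survive a reboot
sudo systemctl restart nfs-server
showmount -e pddaq-gen05-daq0             # run on a client to confirm /daq/artdaq is exported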

nfs servers @ EHN1

[dsavage@np04-srv-007 ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/cc_np04--srv--007-root   50G  6.6G   44G  14% /
devtmpfs                            7.8G     0  7.8G   0% /dev
tmpfs                               7.8G   84K  7.8G   1% /dev/shm
tmpfs                               7.8G  9.0M  7.8G   1% /run
tmpfs                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2                           976M  145M  765M  16% /boot
/dev/mapper/cc_np04--srv--007-home   11T   33M   11T   1% /home
AFS                                 8.6G     0  8.6G   0% /afs
tmpfs                               1.6G   16K  1.6G   1% /run/user/42
tmpfs                               1.6G     0  1.6G   0% /run/user/0
tmpfs                               1.6G     0  1.6G   0% /run/user/1000

[dsavage@np04-srv-007 ~]$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 10.9T  0 disk 
├─sda1                       8:1    0    1M  0 part 
├─sda2                       8:2    0    1G  0 part /boot
└─sda3                       8:3    0 10.9T  0 part 
  ├─cc_np04--srv--007-root 253:0    0   50G  0 lvm  /
  ├─cc_np04--srv--007-swap 253:1    0  7.8G  0 lvm  [SWAP]
  └─cc_np04--srv--007-home 253:2    0 10.9T  0 lvm  /home
sr0                         11:0    1 1024M  0 rom  

[dsavage@np04-srv-007 ~]$ lsblk -d
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda    8:0    0 10.9T  0 disk 
sr0   11:0    1 1024M  0 rom  

kickstart

  • encrypt the root password - openssl passwd -1 <password> (see the sketch below)
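
A sketch of how the hash is used in the kickstart file (rootpw --iscrypted is the standard directive; the placeholder below is not a real hash):

openssl passwd -1 '<password>'            # prints an MD5-crypt hash
rootpw --iscrypted <hash-from-above>      # kickstart line using that hash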

np04 hostgroup

Unix Groups

AFS does not use the UNIX User+Group information associated with files for anything, in particular not for access control (this is what AFS ACLs are for - see KB0000451 and the "fs la" command). The information is recorded as presented by the operating system at the time of writing. As such, changing the UNIX group on files and directories is a purely cosmetic operation, but is restricted in order to not "give away" files (consider the download of suspicious files in a shared area; changing the owning user might implicate an innocent party).

You could run this cleanup (on a particular AFS subtree you own) via the "afs_admin set_owner" command, which is a CERN-specific extension.

EOS does indeed use the UNIX group information for access control - see the "eos chown" command for how to change the user/group information.

[dsavage@lxplus085 ~]$ afs_admin set_owner -r dsavage:np-comp .
This will execute a recursive chown dsavage:np-comp /afs/cern.ch/user/d/dsavage/.
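
For the ACL side mentioned above, the standard AFS fs commands look like this (the directory and rights here are only examples):

[dsavage@lxplus085 ~]$ fs listacl /afs/cern.ch/user/d/dsavage/public            # show who can do what
[dsavage@lxplus085 ~]$ fs setacl /afs/cern.ch/user/d/dsavage/public np-comp rl  # grant read+lookup to a group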

Group Account

  • pdtstusr - associated with my account

E-Groups

  • eos-experiment-cenf-np04-daq-readers
  • eos-experiment-cenf-np04-daq-writers
  • eos-experiment-cenf-np04-daq-admin
  • CENF-Computing
  • np-comp
  • eos-experiment-cenf-general-readers
  • eos-experiment-cenf-general-writers
  • eos-experiment-cenf-np04-readers
  • eos-experiment-cenf-np04-writers
  • np4-test-daq-usr

Computer Inventory

  • pddaq-gen01-ctrl0
  • pddaq-gen02-ctrl0
  • pddaq-gen03
  • pddaq-gen04
  • pddaq-gen05
  • pddaq-gen06

Virtual Machines

  • pdvmtest01
  • pdvmtest02

Laptops

  • fnalnp4107370
  • fnalnp4117018

OS Installation

[root@pdvmtest02 ~]# locmap --list

[Available Modules]
   * Module name : sudo[enabled]
   * Module name : sendmail[enabled]
   * Module name : cernbox[disabled]
   * Module name : ntp[enabled]
   * Module name : gpg[enabled]
   * Module name : cvmfs[disabled]
   * Module name : ssh[enabled]
   * Module name : lpadmin[enabled]
   * Module name : nscd[enabled]
   * Module name : kerberos[enabled]
   * Module name : eosclient[disabled]
   * Module name : afs[enabled]
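
To turn on one of the disabled modules, something like the following (locmap options as I recall them; check locmap --help):

[root@pdvmtest02 ~]# locmap --enable cvmfs
[root@pdvmtest02 ~]# locmap --configure cvmfs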

Software

Ganglia

https://github.com/ganglia/monitor-core/wiki/Ganglia-Quick-Start#Quick_start_guide

  • Copied gmetad.conf and gmond.conf from /etc/ganglia to /etc and modified them, using sudo for both steps.
  • sudo service gmetad start
  • sudo service gmond start
https://cdcvs.fnal.gov/redmine/projects/ds50daq/wiki/Enabling_Ganglia_monitoring_on_the_WH14NE_teststand

As root...

[dsavage@pddaq-gen05 ~]$ yum list | grep ganglia
ganglia.x86_64 3.7.2-2.el7 @epel
ganglia-devel.x86_64 3.7.2-2.el7 epel
ganglia-gmetad.x86_64 3.7.2-2.el7 epel
ganglia-gmond.x86_64 3.7.2-2.el7 epel
ganglia-gmond-python.x86_64 3.7.2-2.el7 epel
ganglia-web.x86_64 3.7.1-2.el7 epel
libnodeupdown-backend-ganglia.x86_64 1.14-8.el7 epel
nordugrid-arc-gangliarc.noarch 1.0.1-1.el7 epel
pcp-import-ganglia2pcp.x86_64 3.11.3-4.el7 base

[dsavage@pddaq-gen05 ~]$ yum list | grep gmetad
ganglia-gmetad.x86_64 3.7.2-2.el7 epel
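
To make the daemons come back after a reboot (standard systemd step, not in the original notes):

sudo systemctl enable gmond
sudo systemctl enable gmetad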

Configuration Management

Create a puppet-managed VM.

  • Requirements
    • Signed up for two-factor authentication at the Service Desk in Building 55.
    • E-Group subscriptions - ai-admins, ai-admins-crm, ai-playground.
    • Admin cluster access instructions - https://cern.service-now.com/service-portal/article.do?n=KB0000765.
      • Join LxAdm-Authorized-Users egroup.
      • Can have another egroup added.
      • Login to aiadm cluster directly from my laptop. (Login from lxplus did not work.)

git clone https://:@gitlab.cern.ch:8443/ai/it-puppet-hostgroup-playground.git

ai-bs-vm --foreman-hostgroup playground/dsavage \
         --foreman-environment qa \
         --cc7 \
         training-dsavage.cern.ch

openstack server show training-dsavage

git pull --rebase origin qa && git push origin qa
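
Once the VM is up, a manual puppet run on it is a quick way to check that the hostgroup applies cleanly (standard puppet agent invocation, run as root on the new VM):

[root@training-dsavage ~]# puppet agent -t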

lm_sensors

Output from running sensors-detect:

Now follows a summary of the probes I have just done.
Just press ENTER to continue: 

Driver `to-be-written':
  * ISA bus, address 0xe4
    Chip `IPMI BMC BT' (confidence: 8)

Driver `coretemp':
  * Chip `Intel digital thermal sensor' (confidence: 9)

Note: there is no driver for IPMI BMC BT yet.
Check http://www.lm-sensors.org/wiki/Devices for updates.

Do you want to overwrite /etc/sysconfig/lm_sensors? (YES/no): 
Unloading i2c-dev... OK
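
After accepting, the detected modules are loaded by the lm_sensors service and the readings can be checked (standard lm_sensors commands):

systemctl restart lm_sensors
sensors                                   # print the detected temperature/voltage readings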

Port Forwarding

Use port forwarding so that I can use the web browser on my laptop, instead of running a browser on the Linux machine and displaying it over X11 on my laptop.

To see the artdaq database:

  • On my laptop - sudo ssh dsavage@np04-srv-010 -L 8880:localhost:8880 -N
  • In browser - http://localhost:8880/db/client.html

To see the HP switch in the DAQ rack:
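
Same pattern as for the database above; the switch hostname, web port and gateway host here are placeholders:

  • On my laptop - sudo ssh dsavage@np04-srv-010 -L 8080:<switch-hostname>:80 -N
  • In browser - http://localhost:8080/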

SSH Port Forwarding/SOCKS

SSH tunnelling to network.cern.ch does work, and there is no restriction on that. The problem is that the server will always try to redirect the browser to network.cern.ch, therefore you will not be able to access any pages using local SSH port forwarding.

Instead, you should use dynamic port forwarding, running SSH as a SOCKS proxy. You will find many examples on the Internet, but basically all you have to do is run ssh -D 2001 username@lxplus.cern.ch in a terminal and then configure your web browser to use localhost as a SOCKS proxy (2001 is just an example, you can use any port you like). In Firefox, this can be done by going to "Preferences > Advanced > Network" and clicking on "Connection Settings". In the pop-up window, select "Manual proxy configuration" and "SOCKS v5", then enter 127.0.0.1 in the "SOCKS Host" field and 2001 in the "Port" field. After doing this, all connections from your web browser will be tunnelled through lxplus, effectively letting you access any website from within the CERN network.
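
In command form (the port number is arbitrary, as noted above):

ssh -D 2001 username@lxplus.cern.ch
# Firefox: Preferences > Advanced > Network > Connection Settings > Manual proxy configuration,
# SOCKS v5, SOCKS Host 127.0.0.1, Port 2001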

Regarding your original request, please note that an opening in the Main CERN Firewall will make your server accessible from anywhere on the Internet, therefore making it vulnerable to malicious attacks (this is why such a request must be first approved by the Computer Security team). If your goal is to access your web service from lxplus and lxbatch, then you do not need to request an opening in the Main CERN Firewall. Since your VMs are already on the same network as lxplus and lxbatch, all you need to do is to manually open the required ports for incoming connections on your VMs.

-- DavidGeoffreySavage - 2017-04-12
