Installation and Configuration Guide via Puppet

This guide explains how to install and configure DPM with puppet.

Overview

Puppet is a configuration management system that can not only maintain and update the configuration of your components, but also manage their installation. That is why this guide is an integrated installation/configuration guide. Since puppet modules are developed by a large community, you can also manage other parts of your system with puppet, e.g. the yum configuration or the firewall settings. In the following, we describe the settings and how they can be applied with puppet.

After the installation is completed, please follow the guide on how to administer your DPM: https://twiki.cern.ch/twiki/bin/view/DPM/DpmAdministration

DPM Terminology

Please refer to DPM/DpmSetupManualInstallation#DPM_Terminology

Installing puppet

For all of this to work, puppet must be installed on the machines.

Supported (tested) puppet version

  • puppet 4 and 5 for DPM <= 1.13.x
  • puppet 5 and 6 for DPM 1.14.x

Puppet 5/6 installation

Add the puppet repository

# puppet 5 on SLC6
rpm -Uvh https://yum.puppet.com/puppet5-release-el-6.noarch.rpm
# puppet 6 on SLC6
rpm -Uvh https://yum.puppet.com/puppet6-release-el-6.noarch.rpm
# puppet 5 on CentOS7
rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
# puppet 6 on CentOS7
rpm -Uvh https://yum.puppet.com/puppet6-release-el-7.noarch.rpm
# puppet 6 on CentOS8
rpm -Uvh https://yum.puppet.com/puppet6-release-el-8.noarch.rpm

and install the puppet-agent package

yum install puppet-agent
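
Note that the puppet-agent package ships its binaries under /opt/puppetlabs/bin, which may not be on the default PATH. A quick, hedged check (the exact path may differ on your distribution):

# verify that the agent is installed and reachable
/opt/puppetlabs/bin/puppet --version

# optionally make the command available in the current shell
export PATH=$PATH:/opt/puppetlabs/bin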

Preconfiguration

Host Certificates

The x509 host certificate and key must be present in `/etc/grid-security/`.

Apart from getting the certificate from the CA, you will probably also need to convert it from PKCS12 to PEM format and set the proper permissions:

# openssl pkcs12 -clcerts -nodes -in <cert> -out /etc/grid-security/hostkey.pem
# chmod 400 /etc/grid-security/hostkey.pem
# openssl pkcs12 -clcerts -nokeys -in <cert> -out /etc/grid-security/hostcert.pem
# chmod 444 /etc/grid-security/hostcert.pem

SELinux disabled

SELinux should be disabled on the system.

cat /etc/sysconfig/selinux:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
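
To avoid waiting for a reboot, the running system can be switched to permissive mode immediately; the commands below are a hedged sketch (the fully 'disabled' state only takes effect after a reboot):

# switch SELinux to permissive for the running system
setenforce 0

# make the setting persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux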

Firewall Configuration - iptables

Please also refer to the Manual Setup for the firewall configuration: DPM/DpmSetupLegacy#Firewall_Configuration

For the local firewall you can also use the puppetlabs-firewall module:

# NOTE: this is only needed for DPM <= 1.14
cp -r /usr/share/puppet/modules/firewalld /usr/share/dmlite/puppet/modules/
puppet module install puppetlabs-firewall

and add the following to your manifest for each port (range):

  firewall{"050 allow http and https":
    state  => "NEW",
    proto  => "tcp",
    dport  => [80, 443],
    action => "accept"
  }

It is possible to specify single ports, multiple ports, and also port ranges. The firewall rules are ordered by their names, so using a number at the beginning of the name is a clean way to sort them.
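
For instance, the disk server transfer port range used elsewhere in this guide (20000-25000) could be opened with a rule like the following hedged sketch:

  firewall{"100 allow dpm transfer ports":
    state  => "NEW",
    proto  => "tcp",
    dport  => "20000-25000",
    action => "accept"
  }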

Firewall Configuration - firewalld

Please also refer to the Manual Setup for the firewall configuration: DPM/DpmSetupLegacy#Firewall_Configuration

For the local firewall you can also use the puppetlabs-firewalld module:

puppet module install puppetlabs-firewalld

and add the following to your manifest in order to allow all the DPM protocols:


class { '::firewalld': }

firewalld::custom_service{'dpmsvc':
  short       => 'dpmsvc',
  description => 'DPM disk server',
  port        => [
    {
        'port'     => '80',
        'protocol' => 'tcp',
    },
    {
        'port'     => '443',
        'protocol' => 'tcp',
    },
    {
        'port'     => '1094',
        'protocol' => 'tcp',
    },
    {
        'port'     => '1095',
        'protocol' => 'tcp',
    },
    {
        'port'     => '2170',
        'protocol' => 'tcp',
    },
    {
        'port'     => '2811',
        'protocol' => 'tcp',
    },
    {
        'port'     => '5001',
        'protocol' => 'tcp',
    },
    {
        'port'     => '20000-25000',
        'protocol' => 'tcp',
    },
  ],

}

firewalld_service { 'DPM disk service':
    ensure  => 'present',
    service => 'dpmsvc',
    zone    => 'public',
  }

This example assumes that the default zone for firewalld is the 'public' zone.
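
If in doubt, the currently active default zone can be checked on the node (assuming firewalld is running):

firewall-cmd --get-default-zone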

puppet-dpm module

The puppet-dpm module has been developed to ease the set up of a DPM installation via puppet.

It can be used to set up different DPM installations:

  • DPM Headnode (with or without a local DB)
  • DPM Disknode
  • DPM Head+Disk Node (with or without a local DB)

Dependencies

It relies on several puppet modules, some of them developed at CERN and others available from third parties.

The following modules are needed in order to use this module; they are installed automatically when using the dmlite-puppet-dpm package or when installing from Puppet Forge:

  • lcgdm-gridftp
  • lcgdm-dmlite
  • lcgdm-lcgdm
  • lcgdm-xrootd
  • lcgdm-voms
  • puppetlabs-stdlib
  • puppetlabs-mysql
  • saz-memcached
  • CERNOps-bdii
  • puppet-fetchcrl
  • puppetlabs-firewall

Installation & Upgrade

RPM

Starting from DPM 1.10.0 the puppet modules needed for the configuration are available in EPEL in a new package:

yum install dmlite-puppet-dpm

This package installs the needed modules under /usr/share/dmlite/puppet/modules/.

When using a Puppet master infrastructure, the installation needs to be performed on the Puppet master.

Puppetforge

The puppet modules are also available via Puppet Forge:

puppet module install lcgdm-dpm

In order to upgrade to a new version, just type:

puppet module upgrade lcgdm-dpm

However, we encourage the use of the RPM installation.

Puppetfile

We also provide a Puppetfile, to be used via r10k or bolt.

The latest Puppetfile is always available at

https://gitlab.cern.ch/lcgdm/dmlite/raw/master/src/puppet/Puppetfile
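
As a hedged sketch, the Puppetfile could be consumed with r10k roughly as follows (the target module directory is an assumption matching the Puppet 5/6 layout; adjust it to your environment):

curl -O https://gitlab.cern.ch/lcgdm/dmlite/raw/master/src/puppet/Puppetfile
r10k puppetfile install --moduledir /etc/puppetlabs/code/environments/production/modules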

Usage

The module's tests folder contains some examples. For instance, you can set up a DPM box with both the HEAD and DISK roles using the following code snippet:

class{'dpm::head_disknode':
   localdomain                    => 'cern.ch',
   db_user                        => 'dpmdbuser',
   db_pass                        => 'PASS',
   db_host                        => 'localhost',
   disk_nodes                     => ['<FQDN of this disk server 1>', '<FQDN of disk server 2>' .....],
   mysql_root_pass                => 'ROOTPASS',
   token_password                 => 'A32TO64CHARACTERA32TO64CHARACTER',
   xrootd_sharedkey               => 'ANOTHER32TO64CHARACTERA32TO64CHARACTER',
   site_name                      => 'CNR_DPM_TEST',
   volist                         => [dteam, lhcb],
   new_installation               => true,
   mountpoints                    => ['/srv/dpm','/srv/dpm/01'],
   configure_dome                 => true,
   configure_domeadapter          => true,
   host_dn                        => 'your headnode host cert DN'
}

The same parameters can be configured via hiera (see the dpm::params class).

Once the code snippet is saved in a file (e.g. dpm.pp), you just need to run:

For DPM >= 1.10.0 when using the dmlite-puppet-dpm package:

puppet apply  --modulepath /usr/share/dmlite/puppet/modules <your manifest>.pp 

For DPM < 1.10.0:

puppet apply <your manifest>.pp 

This will install and configure the DPM box.

Please note that it may be necessary to run the puppet apply command twice in order to have all the changes correctly applied.

Using Hiera

As mentioned, all the parameters of the dpm::params class can also be set in a hiera configuration file instead of being passed to the class.

By default the hiera data can be stored in the folder /var/lib/hiera/ (check https://docs.puppet.com/hiera/1/index.html for details).

The simplest way to use the configuration via hiera is to include the parameters inside the following files:

Puppet 3:
/var/lib/hiera/defaults.yaml

Puppet 4 and 5:
/etc/puppetlabs/code/environments/production/hieradata/defaults.yaml

The headnode configuration can be expressed as follows:

---
dpm::params::localdomain: "cern.ch"
dpm::params::token_password: "TOKEN_PASSWORD"
dpm::params::volist:
  - "dteam"
  - "lhcb"
dpm::params::dpmmgr_uid:  500
dpm::params::db_user: 'dpmdbuser'
dpm::params::db_pass: 'PASS'
dpm::params::db_host: 'localhost'
dpm::params::local_db:  true
dpm::params::mysql_root_pass: 'MYSQLROOT'
dpm::params::site_name: 'CNR_DPM_TEST'
dpm::params::new_installation: false
dpm::params::configure_dome: true
dpm::params::configure_domeadapter: true

dpm::params::host_dn: 'hostDN'

and the manifest just includes this line:

include dpm::headnode

while the disknode configuration could be expressed as follows:

---
dmlite::disk::log_level: 1
dpm::params::debug: true
dpm::params::headnode_fqdn: "dpmhead01.cern.ch"
dpm::params::disk_nodes:
  - "localhost"
dpm::params::localdomain: "cern.ch"
dpm::params::token_password: "TOKEN_PASSWORD"
dpm::params::xrootd_sharedkey: "A32TO64CHARACTERKEYTESTTESTTESTTEST"
dpm::params::volist:
  - "dteam"
  - "lhcb"
dpm::params::dpmmgr_uid:  500
dpm::params::mountpoints:
  - "/data"
  - "/data/01"
dpm::params::configure_dome: true
dpm::params::configure_domeadapter: true
dpm::params::host_dn: 'hostDN'

and the manifest just includes this line:

include dpm::disknode

In order to use that configuration file, we have to explicitly pass it to the puppet apply command:

puppet apply disknode.pp --hiera_config /etc/hiera.yaml
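
For reference, a minimal /etc/hiera.yaml could look like the following sketch (hiera version 5 syntax; the datadir is an assumption matching the Puppet 4/5 path mentioned above):

---
version: 5
defaults:
  datadir: /etc/puppetlabs/code/environments/production/hieradata
  data_hash: yaml_data
hierarchy:
  - name: "Defaults"
    path: "defaults.yaml"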

Using Puppet Master

TO DO

Headnode

The Headnode configuration is performed via the dpm::headnode class or, in the case of a Head+Disk node installation, via the dpm::head_disknode class:

class{"dpm::headnode":
   localdomain                => 'cern.ch',
   db_user                    => 'dpmdbuser',
   db_pass                    => 'PASS',
   db_host                    => 'localhost',
   disk_nodes                 => ['dpm-disk01.cern.ch'],
   local_db                   => true,
   mysql_root_pass            => 'MYSQLROOT',
   token_password             => 'kwpoMyvcusgdbyyws6gfcxhntkLoh8jilwivnivel',
   xrootd_sharedkey           => 'A32TO64CHARACTERA32TO64CHARACTER',
   site_name                  => 'CNR_DPM_TEST',
   volist                     => [dteam, lhcb],
   new_installation           => true,
   pools                      => ['mypool:100M'],                 /* optional */
   filesystems                => ["mypool:${fqdn}:/srv/dpm/01"],  /* optional */
   configure_dome             => true,
   configure_domeadapter      => true,
   host_dn                    => 'your headnode host cert DN'
}

N.B. Each pool and filesystem specified in the pools and filesystems parameters should have the following syntax:

  • pools: 'poolname:defaultSize'
  • filesystems : 'poolname:servername:filesystem_path'

They are taken into account only if configure_default_pool and configure_default_filesystem are set to true. This configuration is needed only if the admin wants to manage the pools/filesystems via puppet.

DB configuration

Depending on the DB installation (local to the headnode or external), there are different configuration parameters to set:

In the case of a local installation, the db_host parameter should be set to localhost and the local_db parameter to true, while for an external DB installation the local_db parameter should be set to false.

N.B. In the case of an external DB installation, the root DB grants for the Headnode must be added manually to the DB:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'HEADNODE' IDENTIFIED BY 'MYSQLROOT' WITH GRANT OPTION;

N.B. In case of an upgrade of an existing DPM installation, the new_installation parameter MUST be set to false.

The mysql_override_options parameter can be used to override the MySQL server configuration. In general the default values provided by the module (via the $dpm::params::mysql_override_options variable) should be fine.
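
If an override is really needed, the following is a hedged sketch of the expected structure (a hash of sections and settings, as used by puppetlabs-mysql; the max_connections value is purely illustrative):

class{'dpm::headnode':
   ...
   mysql_override_options => {
     'mysqld' => {
       'max_connections' => '300',
     },
   },
   ...
}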

The DB management via puppet can be switched off by setting the db_manage parameter to false. In that case all DB configuration must be performed manually.

Other configuration

  • configure_bdii : enable/disable the configuration of the Resource BDII (default = true)
  • configure_default_pool : create the pools specified in the pools parameter (default = false)
  • configure_default_filesystem : create the filesystems specified in the filesystems parameter (default = false)
  • admin_dn : the DN of one of the admins, needed to enable HTTP drain
  • local_map : can be used to add extra entries to /etc/lcgdm-mapfile. This is needed, for instance, to add the Headnode DN mapping required for the HTTP drain (see the sketch below).
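
A hedged sketch of how the HTTP drain related options could look on the headnode class (the DN below is a placeholder; the exact value format expected by local_map is not shown here, so check dpm::params before using it):

class{'dpm::headnode':
   ...
   # DN of one of the admins, needed to enable HTTP drain
   admin_dn  => '/DC=ch/DC=cern/OU=computers/CN=dpmhead.example.com',
   # local_map adds extra entries to /etc/lcgdm-mapfile (e.g. the headnode DN
   # mapping); the expected value type is an assumption - see dpm::params
   # local_map => ...,
   ...
}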

See the Common Configuration section for the rest of the configuration options.

Disknode

The Disknode configuration is performed via the dpm::disknode class, as follows:

class{'dpm::disknode':
   headnode_fqdn                => "HEADNODE",
   disk_nodes                   => ["${::fqdn}"],
   localdomain                  => 'cern.ch',
   token_password               => 'TOKEN_PASSWORD',
   xrootd_sharedkey             => 'A32TO64CHARACTERKEYTESTTESTTESTTEST',
   volist                       => [dteam, lhcb],
   mountpoints                  => ['/data','/data/01'], /*optional*/
   configure_dome                  => true,
   configure_domeadapter          => true,
   host_dn                     => 'your disknode host cert DN'
}

In particular, the mountpoints variable should include the mountpoint paths for the filesystems and the related parent folders.

N.B. It's not advised to use storage mountpoint paths starting with /dpm, as this will create problems for the WebDAV frontend. If this is the case for your installation, please contact the support.

See the Common Configuration section for the rest of the configuration options.

Common configuration

VO list and mapfile

Both the Head and the Disk nodes are configured by default with the list of supported VOs and with the configuration input needed to generate the mapfile.

Both the VOs and the gridmap configuration (which is optional) can be disabled using the following variables:

  • configure_gridmap : enable/disable the configuration of the gridmap file (default = true)
  • configure_vos : enable/disable the configuration of the VOs (default = true)

The volist parameter is needed to specify the supported VOs, while the groupmap parameter specifies how to map VOMS users. By default the dteam VO mapping is given; an example covering all the LHC VOs is as follows:

groupmap = {
  "vomss://voms2.cern.ch:8443/voms/atlas?/atlas"       => "atlas",
  "vomss://lcg-voms2.cern.ch:8443/voms/atlas?/atlas"   => "atlas",
  "vomss://voms2.cern.ch:8443/voms/cms?/cms"           => "cms",
  "vomss://lcg-voms2.cern.ch:8443/voms/cms?/cms"       => "cms",
  "vomss://voms2.cern.ch:8443/voms/lhcb?/lhcb"         => "lhcb",
  "vomss://lcg-voms2.cern.ch:8443/voms/lhcb?/lhcb"     => "lhcb",
  "vomss://voms2.cern.ch:8443/voms/alice?/alice"       => "alice",
  "vomss://lcg-voms2.cern.ch:8443/voms/alice?/alice"   => "alice",
  "vomss://voms2.cern.ch:8443/voms/ops?/ops"           => "ops",
  "vomss://lcg-voms2.cern.ch:8443/voms/ops?/ops"       => "ops",
  "vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam"  => "dteam",
  "vomss://voms2.hellasgrid.gr:8443/voms/dteam?/dteam" => "dteam"
}

Alternatively, the groupmap and the volist can be defined via hiera as follows:


dpm::params::volist:
  - "dteam"
  - "lhcb"

dpm::params::groupmap:
   vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam: 'dteam'
   vomss://voms2.hellasgrid.gr:8443/voms/dteam?/dteam: 'dteam'
   vomss://voms2.cern.ch:8443/voms/lhcb?/lhcb: 'lhcb'
   vomss://lcg-voms2.cern.ch:8443/voms/lhcb?/lhcb: 'lhcb'
   ...

N.B. The VOMS configuration of VO names containing a "." is not supported by this class (it will be ignored); therefore each VO of this type should be explicitly added to your manifest as follows:

voms{"voms::voname":}

and declared as a class, as documented at https://forge.puppet.com/lcgdm/voms

The localmap parameter can be used to define a mapping which is not retrieved from the VOMS.

This will add the mapping to the file /etc/lcgdm-mapfile-local; when the edg-mkgridmap script runs, the mapping will also be available in the /etc/lcgdm-mapfile file.
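
For illustration, entries in these mapfiles follow the usual grid mapfile convention, a quoted certificate subject followed by the group/VO name; a hedged example line (the DN is a placeholder):

"/DC=ch/DC=cern/OU=computers/CN=dpmhead.example.com" dteam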

Xrootd configuration

The basic Xrootd configuration only requires specifying the xrootd_sharedkey, which should be a string of 32 to 64 characters, identical on all the nodes of the cluster.

In case VOMS integration is not requested by the supported VOs, the support can be disabled (it is enabled by default) via the xrootd_use_voms parameter.

N.B. The VOMS integration should be enabled/disabled on the whole cluster at once.

In order to configure the Xrootd federations and the Xrootd monitoring via the dpm_xrootd_fedredirs, xrd_report and xrd_monitor parameters, please refer to the DPM-Xrootd puppet guide for more details:

https://twiki.cern.ch/twiki/bin/view/DPM/DPMComponents_Dpm-Xrootd#Puppet_Configuration

As with any other configuration parameter, the xrootd parameters can also be configured via hiera as follows:

dpm::params::xrootd_use_voms: false
dpm::params::dpm_xrootd_fedredirs:
   atlas:
      name: 'fedredir_atlas'
      fed_host: 'redirector.cern.ch'
      namelib_prefix: '/dpm/cern.ch/home/atlas'
      xrootd_port: 1094
      cmsd_port: 1098
      local_port: 11000
      namelib: 'XrdOucName2NameLFC.so pssorigin=localhost sitename=ATLAS_SITENAME'
      paths:
         - '/atlas'
dpm::params::xrootd_monitor: 'all user flush 30s fstat 60 lfn ops xfr 5 window 5s dest fstat info redir atlas-fax-eu-collector.cern.ch:9330'
dpm::params::xrd_report: 'atlas-fax-eu-collector.cern.ch:9331 every 60s all -buff -poll sync'

Other configuration

  • configure_repos : configure the yum repositories specified in the repos parameter (default = false)
  • gridftp_redirect : enable/disable the GridFTP redirection functionality (default = false)
  • dpmmgr_user, dpmmgr_uid and dpmmgr_gid : the DPM user name, uid and gid (default = dpmmgr, 151 and 151)
  • debug : enable/disable the installation of the debuginfo packages (default = false); see the hiera sketch below
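
As with the rest of the options, these can also be set via hiera; a short sketch with purely illustrative values, assuming the keys follow the same dpm::params naming as the parameters shown earlier:

dpm::params::gridftp_redirect: true
dpm::params::debug: false
dpm::params::dpmmgr_uid: 151
dpm::params::dpmmgr_gid: 151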

Advanced configuration

Here we collect some of the configuration steps needed to support use cases like DPM-Argus, DPM Accounting and WLCG Storage Resource Reporting.

DPM Argus

DPM doesn't query Argus for each storage request, but instead uses its own user/group database, where accounts can be flagged as banned. This is not a perfect (real-time) solution, but it should satisfy the general requirements:

Argus support has been desirable from the EGI and WLCG viewpoints for many years; the main use case has always been the banning ("temporary suspension") of DNs. A response time of 6 hours (as for CRLs) would already be great, though 1 hour would be an improvement that should not stress the Argus services.

dmlite-shell

Since dmlite 1.13.3 it is possible to use dmlite-shell directly to synchronize the ban status with the Argus server. This interface uses the Argus WSDL interface directly to obtain its configuration and doesn't depend on external packages. The DPM dmlite-shell takes into account only the subset of banning rules that applies to all resources and actions (execute pap-admin list-policies --action '.*' --resource '.*' --all on the Argus server to see the configuration considered by DPM) and ignores sections with permit rules. This should be sufficient for central banning, and the DPM accounts can be updated simply by:

dmlite-shell -e 'userban * ARGUS_SYNC site-argus.example.com'
dmlite-shell -e 'groupban * ARGUS_SYNC site-argus.example.com'

DPM also allows the administrator to apply a LOCAL_BAN; users/groups banned locally are not modified by the Argus ban status synchronization (only NO_BAN and ARGUS_BAN are considered). To be able to deny access to Argus-banned users that never used the DPM storage (i.e. whose certificate subject is not in the DPM database), ARGUS_SYNC also creates DPM accounts with the ARGUS_BAN status for all banned users.

Access to the Argus configuration via WSDL is protected, and it is necessary to add permissions for the DPM headnode host certificate (the certificate subject can be obtained with openssl x509 -noout -in /etc/grid-security/hostcert.pem -subject). The access permissions must be added on the Argus server in the [dn] section of the /etc/argus/pap/pap_authorization.ini configuration file, e.g.

[dn]
...
"/DC=your/CN=certificate/CN=subject/CN=for/CN=dpmhead.example.com" : POLICY_READ_LOCAL|POLICY_READ_REMOTE|CONFIGURATION_READ
...

For ARGUS_SYNC it is always necessary to specify the Argus server, but you are not limited to just the site-argus hostname. You can also specify a port, a subset of the paps defined on the Argus server in /etc/argus/pap/pap_configuration.ini and, for redundancy, a list of Argus servers, e.g.

dmlite-shell -e 'userban * ARGUS_SYNC site-argus.example.com'
dmlite-shell -e 'userban * ARGUS_SYNC site-argus.example.com:8150'
dmlite-shell -e 'userban * ARGUS_SYNC site-argus.example.com:8150/default'
dmlite-shell -e 'userban * ARGUS_SYNC site-argus.example.com:8150/default+centralbanning'
dmlite-shell -e 'userban * ARGUS_SYNC site-argus1.example.com|site-argus2.example.com'
dmlite-shell -e 'userban * ARGUS_SYNC site-argus1.example.com:8150/a1+a2+a3|site-argus2.example.com:8150/a1+a2+a3'

The cron configuration can be added automatically by defining the argus_banning headnode puppet configuration option, e.g.

class{'dpm::headnode':
    ...
    argus_banning => 'site-argus.example.com',
    ...
}

Legacy DPM (DPM without DOME)

The DPM Argus integration is performed via a script which is available in the UMD repositories (due to some missing dependencies it cannot be released to EPEL).

The lcgdm module contains a class which can be used to configure the cron job running the script:

https://github.com/cern-it-sdc-id/puppet-lcgdm/blob/master/manifests/argus.pp

This class will be used automatically if you create a configuration similar to the one shown for the DOME case:

class{'dpm::headnode':
  ...
  argus_banning  => 'https://argus_test.cern.ch:8154/authz',
  ...
}   

DPM Accounting

Space accounting for WLCG/EGI is implemented as well; the puppet module adds a cron configuration that executes the accounting script daily:

class{'dpm::headnode':
  ... 
  configure_star => true,
  ...
}

Hiera can also be used to enable StAR accounting; in that case the headnode site_name is automatically used also for the EGI storage record "Site":

dpm::params::configure_star: true

This will install the needed packages (APEL-SSM and its dependencies) and set up the cron job.

It is implemented in the puppet-dmlite module at

https://github.com/cern-it-sdc-id/puppet-dmlite/blob/develop/manifests/accounting.pp

WLCG Storage Resource Reporting

Publishing storagesummary.json with dpm-storage-summary.py is currently not configurable via puppet, so you have to follow the manual SRR instructions; but since dmlite 1.13.2 this information is automatically provided by a CGI script that can be reached at https://dpmhead.example.com/static/srr. This means that the manual configuration is necessary only for DPM installations that don't provide access via the HTTP protocol (e.g. the HTTPS port is blocked by a firewall).
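
A quick, hedged way to check that the endpoint responds (the host name is a placeholder and, depending on your frontend configuration, a client certificate or proxy may be required):

curl --capath /etc/grid-security/certificates \
     --cert /etc/grid-security/hostcert.pem \
     --key /etc/grid-security/hostkey.pem \
     https://dpmhead.example.com/static/srr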

Upgrade / release notes

DPM 1.9.0

When moving to DPM 1.9.0 there are some changes to apply to the manifests

Legacy Flavour

No changes are needed

Dome Flavour

As a preliminary operation, the namespace folder counters need to be initialised to the correct numbers; see Enabling DOME for details.

New puppet configuration options:

configure_dome => true
configure_domeadapter => true

DPM >= 1.10.0

When moving to DPM >= 1.10.0 there are some changes to apply to the manifests

Legacy Flavour

No changes are needed

Dome Flavour

For a DPM with DOME enabled in v1.9.0 the upgrade should be straightforward; otherwise you have to follow the Enabling DOME instructions.

Configuration changes to apply:

configure_dome => true
configure_domeadapter => true
host_dn  => 'your  host cert DN'

N.B. From version 1.10.0 the token_password parameter MUST be a string of more than 32 characters.

Legacy-Free Dome Flavour

It's possible to move an existing Legacy installation to Legacy-free, or to install a new DPM 1.10.0 Legacy-free from scratch.

The Legacy installation packages are not automatically removed by puppet, so the following packages have to be removed manually (see the yum example after the lists):

Headnode:

  • dpm-devel
  • dpm
  • dpm-python
  • dpm-rfio-server
  • dpm-server-mysql
  • dpm-srm-server-mysql
  • dpm-perl
  • dpm-name-server-mysql
  • dmlite-plugins-adapter
  • dmlite-plugins-mysql

Disknode:

  • dpm-devel
  • dpm
  • dpm-python
  • dpm-rfio-server
  • dpm-perl
  • dmlite-plugins-adapter
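
For convenience, the removal could be scripted with yum as in the sketch below (package lists taken from above; double-check against what is actually installed on your nodes):

# on the Headnode
yum remove dpm-devel dpm dpm-python dpm-rfio-server dpm-server-mysql \
           dpm-srm-server-mysql dpm-perl dpm-name-server-mysql \
           dmlite-plugins-adapter dmlite-plugins-mysql

# on the Disknodes
yum remove dpm-devel dpm dpm-python dpm-rfio-server dpm-perl dmlite-plugins-adapter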

Then the change to apply to the configuration is:

configure_legacy    => false

on both the dpm::headnode and dpm::disknode classes (or via hiera).

For GridFTP transfers it's recommended to enable GridFTP redirection in order to improve performance.

The change to apply to the configuration is:

gridftp_redirect    => true

on both the dpm::headnode and dpm::disknode classes (or via hiera).

DPM 1.12.0

Starting from DPM 1.12.0 and the release of XrootD 4.9, two new features can now be enabled on DPM via puppet:

  • Support for X509 delegated credentials for XrootD TPC
  • Support for XrootD Checksums

X509 Delegation for XrootD TPC

Starting from version 4.9, XrootD clients are able to delegate X509 credentials to servers when performing the TPC.

In order to use those credentials and enable XrootD TPC via X509, the disk servers' configuration needs to be updated.

In the dpm::disknode class just enable this parameter:

configure_dpm_xrootd_delegation =>   true

XrootD Checksums

XRootD checksum support needs to be enabled on both the head and the disk nodes.

In order to enable it, you just need to add this parameter to the dpm::headnode and dpm::disknode classes:

configure_dpm_xrootd_checksum =>   true

DPM 1.13.0

In DPM 1.13.0 the configure_dpm_xrootd_checksum and configure_dpm_xrootd_delegation options are enabled by default.

Macaroon Secret

The following parameter has been added to the configuration in order to set the macaroon secret on the headnode (on the dpm::headnode or dpm::head_disknode class):

http_macaroon_secret =>   'your random macaroon secret of 64 chars'

XrootD TPC options

In addition, the xrootd_tpc_options parameter has also been added to configure the TPC options on the disknode (on the dpm::disknode class):

xrootd_tpc_options =>   'xrootd tpc options'

By default the option 'xfr 25' is added, in order to raise the maximum number of concurrent XrootD TPC transfers to 25 (the default in XrootD is 9).

DPM 1.13.2

WLCG SRR publishing via CGI

The SRR is automatically published via the HTTPS protocol by default. Currently this can be changed only by setting the hiera value dmlite::dav::params::enable_srr_cgi to false, as shown below.
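
In hiera form, that single setting would look like the following line:

dmlite::dav::params::enable_srr_cgi: false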

DPM 1.14.0

Support for puppet 4 has been dropped; upgrade to a supported (tested) version, 5 or 6. GridFTP redirection (gridftp_redirect) is now enabled by default, because otherwise data transferred via the gsiftp:// protocol would get tunneled through the DPM headnode. This configuration is still compatible with legacy SRM, but gsiftp:// transfers also rely on GridFTP redirection. Be aware that the client must also support this feature (uberftp and lcg-cp don't support redirection; globus-url-copy must be called with the -dp parameter), otherwise the data gets tunneled through the DPM headnode.
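
For illustration, a client-side transfer using the -dp option mentioned above (host names and paths are placeholders):

globus-url-copy -dp file:///tmp/testfile \
    gsiftp://dpmhead.example.com/dpm/example.com/home/dteam/testfile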

Argus ban status synchronization

Synchronizes users banned (centrally) by Argus; see the DPM Argus section for details.

OpenID Connect and WLCG bearer tokens

The following parameters have been added to the configuration in order to configure OIDC on the dpm::headnode or dpm::head_disknode class:

oidc_clientid         => '< The OIDC Client ID for this service >',
oidc_clientsecret     => '< The OIDC Client Secret for this service >',
oidc_passphrase       => '< The OIDC crypto passphrase >',
oidc_allowissuer      => ['"/dpm/domain.org/home/wlcg" "https://wlcg.cloud.cnaf.infn.it/" wlcg', '"/dpm/domain.org/home/cms" "https://cms-auth.web.cern.ch/" cms'],
oidc_allowaudience    => ['https://wlcg.cern.ch/jwt/v1/any'],

For details see the documentation for the DPM OIDC manual configuration.

DPM 1.15.0

DPM 1.15.0 is built on top of XRootD 5.x and provides support for CentOS8 (plus python3 for all scripts).

Compatibility

The module can configure DPM on RHEL 6 and 7 compatible systems.

The latest DPM 1.14 was tested with puppet 5 and 6.

MySQL 5.1 and 5.5 are supported, as well as MariaDB 5.5 on RHEL 7.

Known Issues

fetchcrl module name change

Starting from puppet-dpm v4.8, one of the dependencies (CERNOps-fetchcrl) has been renamed to puppet-fetchcrl; therefore upgrading to this version or newer will fail because of the name clash, with an error like:

Notice: Downloading from https://forgeapi.puppetlabs.com ...
Error: Could not install module 'puppet-fetchcrl' (latest)
  Installation would overwrite /etc/puppet/modules/fetchcrl
    Currently, 'CERNOps-fetchcrl' (v1.0.0) is installed to that directory
    Use `puppet module install --force` to install this module anyway

You first need to remove CERNOps-fetchcrl before proceeding with the upgrade:

puppet module uninstall --force CERNOps-fetchcrl

voms-clients package name

If the installation is performed using the UMD repositories, the voms client package still uses the old name (voms-clients), which differs from the one available in EPEL (voms-clients-cpp).

For this reason the installation of the voms clients will fail. To fix this, just add the following to your hiera file:

voms::install::clientpkgs:
 - voms-clients

Contributing

Some of the Puppet modules needed by DPM are developed by the DPM team with great help from the community:

  • lcgdm-dpm
  • lcgdm-gridftp
  • lcgdm-dmlite
  • lcgdm-lcgdm
  • lcgdm-xrootd

The source code is available within the dmlite source tree at

https://gitlab.cern.ch/lcgdm/dmlite/tree/develop/src/puppet

In order to contribute to those modules, please open a PR in the CERN GitLab (a CERN account is needed), or alternatively contact dpm-devel@cern.ch to send a patch.

The lcgdm-voms module code is available at

https://github.com/HEP-Puppet/puppet-voms/

and it's maintained by the HEP admins community.

The rest of the modules are taken from Puppet Forge:

  • puppetlabs-stdlib
  • puppetlabs-mysql
  • saz-memcached
  • CERNOps-bdii
  • puppet-fetchcrl
  • puppetlabs-firewall