ALERT! EOL Warning ALERT!: Support for this software ends in summer 2024; consider migrating to dCache (EGI provides help with migration until summer 2023).

Installation and Configuration Guide via Puppet

This guide explains the installation and configuration of DPM with puppet.


Puppet is a configuration management system that can not only preserve and update the configuration of your components, but also manage their installation. That is why this guide is an integrated installation/configuration guide. As puppet modules are developed by a larger community, you can also manage other parts of your system, e.g. the yum configuration or the firewall settings, with puppet. In the following, we describe the settings and how they can be applied with puppet.

After the installation is completed, please follow the guide on how to administer your DPM.

DPM Terminology

Please refer to DPM/DpmSetupManualInstallation#DPM_Terminology

Installing puppet

For all of this to work, puppet must be installed on the machines.

Supported (tested) puppet version

  • puppet 4 and 5 for DPM <= 1.13.x
  • puppet 5 and 6 for DPM 1.14.x

Puppet 5/6 installation

Add the puppet repository

# puppet 5 on SLC6
rpm -Uvh
# puppet 6 on SLC6
rpm -Uvh
# puppet 5 on CentOS7
rpm -Uvh
# puppet 6 on CentOS7
rpm -Uvh
# puppet 6 on CentOS8
rpm -Uvh

and install the puppet-agent package

yum install puppet-agent


Host Certificates

The x509 host certificate and key must be present in `/etc/grid-security/`.

Apart from getting the certificate from the CA, you'll probably end up having to convert the certificate from PKCS12 to PEM format and to set the permissions properly.

# openssl pkcs12 -nocerts -nodes -in <cert> -out /etc/grid-security/hostkey.pem
# chmod 400 /etc/grid-security/hostkey.pem
# openssl pkcs12 -clcerts -nokeys -in <cert> -out /etc/grid-security/hostcert.pem
# chmod 444 /etc/grid-security/hostcert.pem
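After the conversion it is worth checking that the two PEM files actually belong together. A minimal sketch using plain openssl (the helper name is ours, not a DPM tool):

```shell
# Sketch: check that a host certificate and key form a matching pair by
# comparing their public-key moduli (helper name is hypothetical).
check_hostcert_pair() {
    cert_mod=$(openssl x509 -noout -modulus -in "$1") || return 2
    key_mod=$(openssl rsa -noout -modulus -in "$2") || return 2
    [ "$cert_mod" = "$key_mod" ]
}

# Typical invocation on a DPM node:
# check_hostcert_pair /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem \
#     && echo "certificate and key match"
```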

SELinux disabled

SELinux should be disabled on the system

cat /etc/sysconfig/selinux:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
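The setting can also be applied non-interactively by rewriting the SELINUX= line in place. A minimal sketch (the function name is ours; a reboot, or `setenforce 0` for the running system, is still needed afterwards):

```shell
# Sketch: force SELINUX=disabled in an SELinux config file
# (normally /etc/sysconfig/selinux); helper name is hypothetical.
disable_selinux_in() {
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# disable_selinux_in /etc/sysconfig/selinux
# setenforce 0   # stop enforcement immediately, until the next reboot
```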

Firewall Configuration - iptables

Please also refer to the Manual Setup for the firewall configuration: DPM/DpmSetupLegacy#Firewall_Configuration

For the local firewall you can also use the puppetlabs-firewall module:

puppet module install puppetlabs-firewall

and add the following to your manifest for each port (range):

  firewall{"050 allow http and https":
    state  => "NEW",
    proto  => "tcp",
    dport  => [80, 443],
    action => "accept",
  }

It is possible to specify single ports, multiple ports, and also port ranges. The firewall rules are ordered by their names, so using a number at the beginning of the name is a clean way to sort them.

Firewall Configuration - firewalld

Please also refer to the Manual Setup for the firewall configuration: DPM/DpmSetupLegacy#Firewall_Configuration

For the local firewall you can also use the puppetlabs-firewalld module:

puppet module install puppetlabs-firewalld

(NOTE: this is just for DPM <= 1.14) cp -r /usr/share/puppet/modules/firewalld /usr/share/dmlite/puppet/modules/

(alternatively) puppet module install puppet-firewalld

(alternatively) cp -r /etc/puppetlabs/code/environments/production/modules/firewalld/ /usr/share/dmlite/puppet/modules/

and add the following to your manifest in order to allow all the DPM protocols:

class { '::firewalld': }

firewalld::custom_service { 'dpmsvc':
  short       => 'dpmsvc',
  description => 'DPM disk server',
  port        => [
    { 'port' => '80',          'protocol' => 'tcp' },
    { 'port' => '443',         'protocol' => 'tcp' },
    { 'port' => '1094',        'protocol' => 'tcp' },
    { 'port' => '1095',        'protocol' => 'tcp' },
    { 'port' => '2170',        'protocol' => 'tcp' },
    { 'port' => '2811',        'protocol' => 'tcp' },
    { 'port' => '5001',        'protocol' => 'tcp' },
    { 'port' => '20000-25000', 'protocol' => 'tcp' },
  ],
}

firewalld_service { 'DPM disk service':
  ensure  => 'present',
  service => 'dpmsvc',
  zone    => 'public',
}

This example assumes that the default zone for firewalld is the 'public' zone.

puppet-dpm module

The puppet-dpm module has been developed to ease the set up of a DPM installation via puppet.

It can be used to set up different DPM installations:

  • DPM Headnode ( with or without a local DB)
  • DPM Disknode
  • DPM Head+Disk Node ( with or without a local DB)


It relies on several puppet modules, some of them developed at CERN and others available from third parties.

The following modules are needed in order to use this module; they are installed when using the dmlite-puppet-dpm package or from puppetforge:

  • lcgdm-gridftp
  • lcgdm-dmlite
  • lcgdm-lcgdm
  • lcgdm-xrootd
  • lcgdm-voms
  • puppetlabs-stdlib
  • puppetlabs-mysql
  • saz-memcached
  • CERNOps-bdii
  • puppet-fetchcrl
  • puppetlabs-firewall

Installation & Upgrade


Starting from DPM 1.10.0 the puppet modules needed for the configuration are available in EPEL in a new package

yum install dmlite-puppet-dpm

this package will install the needed modules under /usr/share/dmlite/puppet/modules/.

In case of a Puppet infrastructure, the installation needs to be performed on the Puppet Master.


The puppet modules are also available via puppetforge:

puppet module install lcgdm-dpm

in order to upgrade to a new version just type

puppet module upgrade lcgdm-dpm

but we encourage using the rpm installation.


We also provide a Puppetfile to be used via r10k or bolt.

The latest Puppetfile is always available at


The module folder tests contains some examples; for instance, you can set up a DPM box with both HEAD and DISK roles with the following code snippet:

   class{'dpm::head_disknode':
    localdomain           => '',
    db_user               => 'dpmdbuser',
    db_pass               => 'PASS',
    db_host               => 'localhost',
    disk_nodes            => ['<FQDN of disk server 1>', '<FQDN of disk server 2>' .....],
    mysql_root_pass       => 'ROOTPASS',
    token_password        => 'A32TO64CHARACTERA32TO64CHARACTER',
    xrootd_sharedkey      => 'ANOTHER32TO64CHARACTERA32TO64CHARACTER',
    site_name             => 'CNR_DPM_TEST',
    volist                => [dteam, lhcb],
    new_installation      => true,
    mountpoints           => ['/srv/dpm', '/srv/dpm/01'],
    configure_dome        => true,
    configure_domeadapter => true,
    host_dn               => 'your headnode host cert DN',
   }

The same parameters can be configured via hiera (see the dpm::params class).

Having saved the code snippet in a file (e.g. dpm.pp), you just need to run:

For DPM >= 1.10.0 when using the dmlite-puppet-dpm package:

puppet apply  --modulepath /usr/share/dmlite/puppet/modules <your manifest>.pp 

for DPM < 1.10.0:

puppet apply <your manifest>.pp 

to have the DPM box installed and configured

Please note that it may be necessary to run the puppet apply command twice in order to have all the changes correctly applied.
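One way to automate the double run is puppet's --detailed-exitcodes flag (exit code 2 means "changes were applied", 0 means nothing was left to change). The retry wrapper below is our own sketch, not part of the dmlite-puppet-dpm package:

```shell
# Sketch: re-run a "puppet apply --detailed-exitcodes" command until a run
# reports no further changes (exit 0). Exit 2 = changes applied, try again;
# anything else is a real failure. The wrapper itself is hypothetical.
apply_until_converged() {
    attempts="$1"; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then rc=0; else rc=$?; fi
        case "$rc" in
            0) return 0 ;;       # converged
            2) i=$((i + 1)) ;;   # changes applied, run once more
            *) return "$rc" ;;   # failure
        esac
    done
    return 2                     # still changing after all attempts
}

# apply_until_converged 3 puppet apply --detailed-exitcodes \
#     --modulepath /usr/share/dmlite/puppet/modules dpm.pp
```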

Using Hiera

As said, all parameters in the dpm::params class can also be included in a hiera configuration file instead of being passed to the class.

By default the hiera data can be stored in the folder /var/lib/hiera/ (check for details).

The simplest way to use the configuration via hiera is to include the parameters inside the files


Puppet4 and 5

The headnode configuration can be expressed as follows:

dpm::params::localdomain: ""
dpm::params::token_password: "TOKEN_PASSWORD"
dpm::params::volist:
  - "dteam"
  - "lhcb"
dpm::params::dpmmgr_uid: 500
dpm::params::db_user: 'dpmdbuser'
dpm::params::db_pass: 'PASS'
dpm::params::db_host: 'localhost'
dpm::params::local_db:  true
dpm::params::mysql_root_pass: 'MYSQLROOT'
dpm::params::site_name: 'CNR_DPM_TEST'
dpm::params::new_installation: false
dpm::params::configure_dome: true
dpm::params::configure_domeadapter: true

dpm::params::host_dn: 'hostDN'

and the manifest just includes this line:

include dpm::headnode

while the disknode configuration could be expressed as follows:

dmlite::disk::log_level: 1
dpm::params::debug: true
dpm::params::headnode_fqdn: ""
  - "localhost"
dpm::params::localdomain: ""
dpm::params::token_password: "TOKEN_PASSWORD"
dpm::params::xrootd_sharedkey: "A32TO64CHARACTERKEYTESTTESTTESTTEST"
dpm::params::volist:
  - "dteam"
  - "lhcb"
dpm::params::dpmmgr_uid: 500
dpm::params::mountpoints:
  - "/data"
  - "/data/01"
dpm::params::configure_dome: true
dpm::params::configure_domeadapter: true
dpm::params::host_dn: 'hostDN'

and the manifest just includes this line:

include dpm::disknode

In order to use that configuration file we should explicitly pass it to the puppet apply command:

puppet apply disknode.pp --hiera_config /etc/hiera.yaml

Using Puppet Master



The Headnode configuration is performed via the dpm::headnode class or, in case of an installation of a Head+Disk node, via the dpm::head_disknode class:

   class{'dpm::headnode':
    localdomain           => '',
    db_user               => 'dpmdbuser',
    db_pass               => 'PASS',
    db_host               => 'localhost',
    disk_nodes            => [''],
    local_db              => true,
    mysql_root_pass       => 'MYSQLROOT',
    token_password        => 'kwpoMyvcusgdbyyws6gfcxhntkLoh8jilwivnivel',
    xrootd_sharedkey      => 'A32TO64CHARACTERA32TO64CHARACTER',
    site_name             => 'CNR_DPM_TEST',
    volist                => [dteam, lhcb],
    new_installation      => true,
    pools                 => ['mypool:100M'],                 # optional
    filesystems           => ["mypool:${fqdn}:/srv/dpm/01"],  # optional
    configure_dome        => true,
    configure_domeadapter => true,
    host_dn               => 'your headnode host cert DN',
   }

N.B. Each pool and filesystem specified in the pools and filesystems parameter should have the following syntax:

  • pools: 'poolname:defaultSize'
  • filesystems : 'poolname:servername:filesystem_path'

They are taken into account only if configure_default_pool and configure_default_filesystem are set to true. This configuration is needed only if the admin wants to manage the filesystems/pools via puppet.
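As a sanity check before editing the manifest, the two spec formats can be validated with plain shell globs; these helper names are ours, not part of puppet-dpm:

```shell
# Sketch: validate 'poolname:defaultSize' and
# 'poolname:servername:filesystem_path' spec strings (hypothetical helpers).
valid_pool_spec() {
    case "$1" in
        *:*:*) return 1 ;;               # too many fields for a pool spec
        [!:]*:[0-9]*[MGT]) return 0 ;;   # e.g. mypool:100M
        *) return 1 ;;
    esac
}

valid_filesystem_spec() {
    case "$1" in
        [!:]*:[!:]*:/*) return 0 ;;      # e.g. mypool:disk01.example.org:/srv/dpm/01
        *) return 1 ;;
    esac
}
```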

DB configuration

Depending on the DB installation (local to the headnode or external) there are different configuration parameters to set:

In case of a local installation the db_host parameter should be configured as localhost, together with the local_db parameter set to true, while for an external DB installation the local_db parameter should be set to false.

N.B. the root DB grants for the Headnode should be added manually to the DB in case of an external DB installation.


N.B. In case of an upgrade of an existing DPM installation the new_installation parameter MUST be set to false

The mysql_override_options parameter can be used to override the mysql server configuration. In general the default values provided by the module (via the $dpm::params::mysql_override_options var) should be fine.

The DB management via puppet can be switched off by setting the parameter db_manage to false. In this case every DB configuration should be performed manually.

Other configuration

  • configure_bdii : enable/disable the configuration of the Resource BDII (default = true)
  • configure_default_pool : create the pools specified in the pools parameter (default = false)
  • configure_default_filesystem : create the filesystems specified in the filesystems parameter (default = false)
  • admin_dn : the DN of one of the admins, needed to enable HTTP drain
  • local_map : can be used to add extra entries to /etc/lcgdm-mapfile. This is needed, for instance, to add the Headnode DN mapping needed for the HTTP drain.

See the Common Configuration section for the rest of the configuration options.


The Disknode configuration is performed via the dpm::disknode class, as follows:

   class{'dpm::disknode':
    headnode_fqdn         => "HEADNODE",
    disk_nodes            => [$::fqdn],
    localdomain           => '',
    token_password        => 'TOKEN_PASSWORD',
    xrootd_sharedkey      => 'A32TO64CHARACTERKEYTESTTESTTESTTEST',
    volist                => [dteam, lhcb],
    mountpoints           => ['/data', '/data/01'],  # optional
    configure_dome        => true,
    configure_domeadapter => true,
    host_dn               => 'your disknode host cert DN',
   }

In particular the mountpoints var should include the mountpoint paths for the filesystems and the related parent folders.

N.B. It's not advised to use storage mountpoint paths starting with /dpm, as this will create trouble for the WebDAV frontend. If this is the case for your installation, please contact the support.

See the Common Configuration section for the rest of configuration options

Common configuration

VO list and mapfile

Both Head and Disk nodes are configured by default with the list of supported VOs and the configuration input to generate the mapfile.

Both the VOs and the gridmap configuration (which is optional) can be disabled by using the following vars:

  • configure_gridmap : enable/disable the configuration of the gridmap file (default = true)
  • configure_vos : enable/disable the configuration of the VOs (default = true)

The parameter volist is needed to specify the supported VOs, while the groupmap parameter specifies how to map VOMS users. By default the dteam VO mapping is given; an example for all the LHC VO mappings is as follows:

groupmap = {
  "vomss://"            => "atlas",
  "vomss://"      => "atlas",
  "vomss://"              => "cms", 
  "vomss://"        => "cms",
  "vomss://"              => "lhcb", 
  "vomss://"        => "lhcb",
  "vomss://"             => "alice", 
  "vomss://"      => "alice",
  "vomss://"               => "ops", 
  "vomss://"         => "ops",
  "vomss://"  => "dteam",
  "vomss://"  => "dteam"
}

We can otherwise define the groupmap and the volist via hiera as follows:

dpm::params::volist:
  - "dteam"
  - "lhcb"

dpm::params::groupmap:
   vomss:// 'dteam'
   vomss:// 'dteam'
   vomss:// 'lhcb'
   vomss:// 'lhcb'

N.B. The VOMS configuration of VO names containing "." is not supported with this class (it will be ignored), therefore each VO of this type should be explicitly added to your manifest as follows:


and declared as a class like documented at

The localmap parameter can be used to define a mapping which is not retrieved from the VOMS.

This will add the mapping to the file /etc/lcgdm-mapfile-local; when the edg-mkgridmap script runs, the mapping will then be available in the /etc/lcgdm-mapfile file.
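The entries in /etc/lcgdm-mapfile-local follow the usual mapfile layout of a quoted certificate subject DN followed by the group name (an assumption here; check an existing mapfile on your node). A minimal sketch with a hypothetical helper and a placeholder DN:

```shell
# Sketch: append a local mapping entry ("<subject DN>" <group>) to a
# mapfile. Helper name and the DN below are placeholders.
add_local_mapping() {
    printf '"%s" %s\n' "$2" "$3" >> "$1"
}

# add_local_mapping /etc/lcgdm-mapfile-local \
#     "/DC=org/DC=example/CN=headnode.example.org" dteam
```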

Xrootd configuration

The basic Xrootd configuration only requires specifying the xrootd_sharedkey, which should be a 32 to 64 character long string, the same for the whole cluster.
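A compliant key can be generated once and distributed to every node. A sketch using openssl (the helper name is ours):

```shell
# Sketch: generate a random alphanumeric xrootd_sharedkey of a given
# length (the guide requires 32-64 chars, identical on the whole cluster).
gen_sharedkey() {
    len="${1:-48}"
    openssl rand -base64 128 | tr -dc 'A-Za-z0-9' | cut -c1-"$len"
}

# gen_sharedkey 48
```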

In case VOMS integration is not requested by the supported VOs, the support can be disabled (enabled by default) via the parameter xrootd_use_voms.

N.B. The VOMS integration should be enabled/disabled on the whole cluster at once.

In order to configure the Xrootd federations and the Xrootd monitoring via the parameters dpm_xrootd_fedredirs, xrd_report and xrd_monitor, please refer to the DPM-Xrootd puppet guide for more details.

As with any other configuration parameter, the xrootd parameters can also be configured via hiera as follows:

dpm::params::dpm_xrootd_fedredirs:
   atlas:
      name: 'fedredir_atlas'
      fed_host: ''
      namelib_prefix: '/dpm/'
      xrootd_port: 1094
      cmsd_port: 1098
      local_port: 11000
      namelib: ' pssorigin=localhost sitename=ATLAS_SITENAME'
      paths:
         - '/atlas'
dpm::params::xrootd_monitor: 'all user flush 30s fstat 60 lfn ops xfr 5 window 5s dest fstat info redir'
dpm::params::xrd_report: ' every 60s all -buff -poll sync'

Other configuration

  • configure_repos : configure the yum repositories specified in the repos parameter (default = false)
  • gridftp_redirect : enable/disable the GridFTP redirection functionality (default = false)
  • dpmmgr_user , dpmmgr_uid and dpmmgr_gid : the dpm user name, uid and gid (default = dpmmgr, 151 and 151)
  • debug : enable/disable installation of the debuginfo packages (default = false)

Advanced configuration

Here we collect some of the configuration steps needed to support use cases like DPM-Argus, DPM Accounting and WLCG Storage Resource Reporting.

DPM Argus

DPM doesn't query Argus for each storage request, but instead uses its own user/group database, where accounts can be flagged as banned. This is not a perfect (realtime) solution, but it should satisfy the general requirements:

Argus support has been desirable from the EGI and WLCG viewpoints for many years; the main use case has always been the banning ("temporary suspension") of DNs. A response time of 6 hours (as for CRLs) would already be great, though 1 hour would be an improvement that should not cause any Argus services to be stressed.


Since dmlite 1.13.3 it is possible to use dmlite-shell directly to synchronize the ban status with the Argus server. This interface uses the Argus WSDL interface directly to obtain its configuration and doesn't depend on external packages. The DPM dmlite-shell takes into account only the subset of banning rules that apply to all resources and actions (execute pap-admin list-policies --action '.*' --resource '.*' --all on the Argus server to see the configuration considered by DPM) and ignores sections with permit rules. This should be sufficient for central banning, and DPM accounts can be updated simply by

dmlite-shell -e 'userban * ARGUS_SYNC'
dmlite-shell -e 'groupban * ARGUS_SYNC'

DPM also allows the administrator to apply a LOCAL_BAN; users/groups banned locally are not modified by the Argus ban status synchronization (only NO_BAN and ARGUS_BAN are considered). To be able to deny access to Argus-banned users that never used the DPM storage (their certificate subject is not in the DPM database), ARGUS_SYNC also creates DPM accounts with ARGUS_BAN status for all banned users.

Argus configuration access via WSDL is protected, and it is necessary to add permissions for the DPM headnode host certificate (the certificate subject can be obtained with openssl x509 -noout -in /etc/grid-security/hostcert.pem -subject). Access permissions on the Argus server must be added in the /etc/argus/pap/pap_authorization.ini configuration file in the [dn] section, e.g.


For ARGUS_SYNC it is always necessary to specify the Argus server, but it is possible to specify more than just the site-argus hostname: you can also specify a port, a subset of the paps defined on the Argus server in /etc/argus/pap/pap_configuration.ini, and, for redundancy, a list of Argus servers, e.g.

dmlite-shell -e 'userban * ARGUS_SYNC'
dmlite-shell -e 'userban * ARGUS_SYNC'
dmlite-shell -e 'userban * ARGUS_SYNC'
dmlite-shell -e 'userban * ARGUS_SYNC'
dmlite-shell -e 'userban * ARGUS_SYNC|'
dmlite-shell -e 'userban * ARGUS_SYNC|'

Cron configuration can be added automatically by defining the argus_banning headnode puppet configuration option, e.g.

class{'dpm::headnode':
        argus_banning => '',
}

Legacy DPM (DPM without DOME)

The DPM Argus integration is performed via a script which is available in the UMD repos (due to some missing deps we cannot release it to EPEL).

The lcgdm module contains a class which can be used to configure the cron running the script.

This class will be automatically used if you create a configuration similar to the one mentioned in the DOME case:

  argus_banning  => '',

DPM Accounting

Space accounting for WLCG/EGI is implemented as a script, and the puppet module adds the cron configuration to execute this script daily:

  configure_star => true,

Hiera can also be used to enable StAR accounting; in that case the headnode site_name is automatically used also for the EGI storage record "Site":

dpm::params::configure_star: true

Be aware that DPM < 1.15.0 doesn't provide the right puppet configuration for the APEL AMS interface, and the old SSM has no longer been supported by EGI since summer 2021.

WLCG Storage Resource Reporting

Publishing storagesummary.json is currently not configurable via puppet and you have to follow the manual instructions for SRR, but since dmlite 1.13.2 this information is automatically provided by a CGI script. This means that manual configuration is necessary only for DPM installations that don't provide access with the HTTP protocol (e.g. the HTTPS port is blocked by a firewall).

Upgrade / release notes

DPM 1.9.0

When moving to DPM 1.9.0 there are some changes to apply to the manifests

Legacy Flavour

No changes are needed

Dome Flavour

As a preliminary operation, the namespace folder counters need to be initialised to the correct numbers, see Enabling DOME for details.

New puppet configuration options:

configure_dome => true
configure_domeadapter => true

DPM >= 1.10.0

When moving to DPM >= 1.10.0 there are some changes to apply to the manifests

Legacy Flavour

No changes are needed

Dome Flavour

For DPM with DOME enabled in v1.9.0 the upgrade should be straightforward, otherwise you have to follow the Enabling DOME instructions.

Conf changes to apply:

configure_dome => true
configure_domeadapter => true
host_dn  => 'your  host cert DN'

N.B. From version 1.10.0 the token_password parameter MUST be a string with more than 32 chars.

Legacy-Free Dome Flavour

It's possible to move an existing Legacy installation to Legacy-free, or just install a new DPM 1.10.0 Legacy-free from scratch.

The Legacy installation packages are not automatically removed by puppet, so the following packages have to be removed manually:


  • dpm-devel
  • dpm
  • dpm-python
  • dpm-rfio-server
  • dpm-server-mysql
  • dpm-srm-server-mysql
  • dpm-perl
  • dpm-name-server-mysql
  • dmlite-plugins-adapter
  • dmlite-plugins-mysql


  • dpm-devel
  • dpm
  • dpm-python
  • dpm-rfio-server
  • dpm-perl
  • dmlite-plugins-adapter

Then the change to apply to the conf is:

configure_legacy    => false

on both dpm::headnode and dpm::disknode classes ( or via hiera).

For GridFTP transfers it's recommended to enable GridFTP redirection in order to improve performance.

The change to apply to the conf is:

gridftp_redirect    => true

on both dpm::headnode and dpm::disknode classes ( or via hiera).

DPM 1.12.0

Starting from DPM 1.12.0 and the release of XrootD 4.9, two new features can now be enabled on DPM also via puppet:

  • Support for X509 delegated credentials for XrootD TPC
  • Support for XrootD Checksums

X509 Delegation for XrootD TPC

Starting from version 4.9, XrootD clients are able to delegate X509 credentials to servers when performing the TPC.

In order to use those credentials and enable XrootD TPC via X509 the diskservers configuration needs to be updated.

In the dpm::disknode class just enable this parameter:

configure_dpm_xrootd_delegation =>   true

XrootD Checksums

XRootD checksum support needs to be enabled on both head and disk nodes.

In order to enable it you just need to add this parameter to the classes dpm::headnode and dpm::disknode

configure_dpm_xrootd_checksum =>   true

DPM 1.13.0

In DPM 1.13.0, configure_dpm_xrootd_checksum and configure_dpm_xrootd_delegation are now enabled by default.

Macaroon Secret

The following parameter has been added to the conf in order to configure the macaroon secret on the headnode.

on the dpm::headnode class or dpm::head_disknode

http_macaroon_secret =>   'your random macaroon secret of 64 chars'

XrootD TPC options

In addition, the xrootd_tpc_options parameter has also been added to configure the TPC options on the disknode,

on the dpm::disknode class

xrootd_tpc_options =>   'xrootd tpc options'

By default the option 'xfr 25' is added, in order to raise the maximum number of concurrent XrootD TPC transfers to 25 (the default in XrootD is 9).

DPM 1.13.2

WLCG SRR publishing via CGI

SRR is automatically published with the HTTPS protocol by default. This can currently be modified only by setting the hiera value dmlite::dav::params::enable_srr_cgi to false.

DPM 1.14.0

Dropped support for puppet 4 - upgrade to supported (tested) version 5 or 6. GridFTP redirection gridftp_redirect is now enabled by default, because otherwise transferred data for the gsiftp:// protocol would get tunneled through the DPM headnode. This configuration is still compatible with legacy SRM, but gsiftp:// transfers also rely on GridFTP redirection. Be aware that the client must also support this feature (uberftp and lcg-cp don't support redirection; globus-url-copy must be called with the -dp parameter), otherwise data gets tunneled through the DPM headnode.

Argus ban status synchronization

Synchronize users banned (centrally) by Argus; see the DPM Argus section for details.

OpenID Connect and WLCG bearer tokens

The following parameter has been added to the conf in order to configure the OIDC on the dpm::headnode class or dpm::head_disknode

oidc_clientid         => '< The OIDC Client ID for this service >',
oidc_clientsecret     => '< The OIDC Client Secret for this service >',
oidc_passphrase       => '< The OIDC crypto passphrase >',
oidc_allowissuer      => ['"/dpm/" "" wlcg', '"/dpm/" "" cms'],
oidc_allowaudience    => [''],

For details see documentation for DPM OIDC manual configuration

DPM 1.14.2

CentOS7 version build with XRootD 5.x

DPM 1.15.0

This is mostly a bugfix release with improved stability and minor performance improvements.

  • CentOS7 supported (SLC6 EOL and no longer supported, CentOS8 available for testing)
  • option to disable IP based security for HTTP, because it can cause failures with dualstack DPM (new TokenId value none, all DPM nodes must be upgraded)
  • Fixed a concurrency issue in the HTTP protocol that led to random transfer failures (increasing failure rate for long transfers and stressed storage)
  • use same TLS versions and list of ciphers for all protocols
  • CRL support for HTTP-TPC
  • dmlite-shell supports python3 and most of tools/script integrated (e.g. dbck, lost and dark data check, storage dumps, srr, apel, ...)
  • dmlite-shell provides interface to all DOME API functions
  • basic runtime reconfiguration (e.g. modify loglevel without restart) and file timestamp updates added in DOME API
  • minor improvements in OIDC support (token support should be still considered just for tests / not fully compliant with WLCG JWT profile)

DPM 1.15.1

Bugfix release

  • fix compatibility issues that prevented interoperability between 1.14.x and 1.15.0 headnode / disknodes
  • fix problems with GridFTP protocol on CentOS8 disknodes
  • fix issues with StAR accounting called via dmlite-shell
  • allow ignoring missing CRLs when validating certificates for HTTP-TPC (enabled by default)

DPM 1.15.2

  • included dCache migration tools that provide a simple transition from DPM to dCache with one day of downtime
  • improved dbck in dmlite-shell with additional functionality (e.g. calculation of missing checksums)
  • support for HTTP MOVE operation with macaroon token
  • protect apache from too slow client / infinite transfers LCGDM-2980
    • minimum speed for download is 10kB/s
    • maximum download time 12 hours
  • return macaroon token also for / and /dpm path
  • puppet configurable mysql port for dmlite database
  • use correct database encoding latin1 in the dmlite-shell
  • metadata cache expiration time runtime modification LCGDM-2988 (drop cached data)
  • puppet VOMS module update based on CERN source
  • cleanup of gridmap configuration and documentation
    • gridmap files are deprecated for many years and not useful besides DPM web interface
    • new IAM VOMS doesn't provide anonymous interface to obtain VO certificate subjects


The module can configure a DPM on RHEL 6, 7 and 8 compatible systems.

DPM 1.14+ was tested with puppet 5 and 6

MySQL 5.1 and 5.5 are supported, as well as MariaDB 5.5 on RHEL 7.

Known Issues

fetchcrl module name change

Starting from puppet-dpm v4.8 one of the dependencies (CERNOps-fetchcrl) has been renamed to puppet-fetchcrl, therefore upgrading to this version or newer will fail because of the name clash, with an error like:

Notice: Downloading from ...
Error: Could not install module 'puppet-fetchcrl' (latest)
  Installation would overwrite /etc/puppet/modules/fetchcrl
    Currently, 'CERNOps-fetchcrl' (v1.0.0) is installed to that directory
    Use `puppet module install --force` to install this module anyway

You first need to remove CERNOps-fetchcrl before proceeding with the upgrade:

puppet module uninstall --force CERNOps-fetchcrl

voms-clients package name

In case the installation is performed using the UMD repositories, the voms client package still uses the old name (voms-clients), different from the one available in EPEL (voms-clients-cpp).

For this reason the installation of the voms clients will fail. To fix this just add to your hiera file:

 - voms-clients


Some of the Puppet modules needed by DPM are developed by the DPM team with great help from the community:

  • lcgdm-dpm
  • lcgdm-gridftp
  • lcgdm-dmlite
  • lcgdm-lcgdm
  • lcgdm-xrootd

The source code is available within the dmlite source tree at

In order to contribute to those modules, please open a PR in the CERN Gitlab (a CERN account is needed), or alternatively contact the developers to send a patch.

The lcgdm-voms module code is available at

and it's maintained by the HEP admins community.

The rest of the modules are taken from puppetforge:

  • puppetlabs-stdlib
  • puppetlabs-mysql
  • saz-memcached
  • CERNOps-bdii
  • puppet-fetchcrl
  • puppetlabs-firewall
Topic revision: r68 - 2022-06-22 - PetrVokac