LCG Disk Pool Manager (DPM) administrator's guide
Functional description
See DPM General Description
Daemons running
See DPM Daemon Description
The command line arguments can be found in the documentation.
On the DPNS server machine:
- /opt/lcg/bin/dpnsdaemon
On each disk server managed by the DPM:
- /opt/lcg/bin/rfiod
On the DPM and SRM server machine(s):
- /opt/lcg/bin/dpm
- /opt/lcg/bin/srmv1
- /opt/lcg/bin/srmv2
- /opt/lcg/bin/srmv2.2
On each disk server managed by the DPM:
- /opt/globus/sbin/globus-gridftp-server
If Xrootd is configured:
- On the head node:
  - /opt/lcg/bin/dpm-manager-xrootd
  - /opt/lcg/bin/dpm-manager-olbd
- On each disk server:
  - /opt/lcg/bin/dpm-xrootd
  - /opt/lcg/bin/dpm-olbd
WebAccess:
- the DPM Apache service (managed via the dpm-httpd init script below)
Init scripts and options (start|stop|restart|...)
- /etc/init.d/dpnsdaemon {start|stop|status|restart|condrestart}
- /etc/init.d/rfiod {start|stop|status|restart|condrestart}
- /etc/init.d/dpm {start|stop|status|restart|condrestart}
- /etc/init.d/srmv1 {start|stop|status|restart|condrestart}
- /etc/init.d/srmv2 {start|stop|status|restart|condrestart}
- /etc/init.d/srmv2.2 {start|stop|status|restart|condrestart}
- /etc/init.d/dpm-gsiftp {start|stop|restart|reload|condrestart|status}
- Xrootd disk server:
- service dpm-xrd <start|status|stop...>
- service dpm-olb <start|status|stop...>
- Xrootd head node:
- service dpm-manager-xrd
- service dpm-manager-olb
- To start the DPM Apache service: /etc/init.d/dpm-httpd start
- To stop the DPM Apache service: /etc/init.d/dpm-httpd stop
- To check its status: /etc/init.d/dpm-httpd status
- To restart the DPM Apache service: /etc/init.d/dpm-httpd restart
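As a quick sanity check after (re)starting the services, the status of all daemons can be queried via their init scripts. A minimal sketch, assuming the default init scripts listed above (omit the daemons that do not run on the given machine):
  # on the head node (DPNS + DPM + SRM):
  service dpnsdaemon status
  service dpm status
  service srmv2.2 status
  # on each disk server:
  service rfiod status
  service dpm-gsiftp status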
Configuration files location with example or template
See DPM Configuration
By default, the database configuration files are:
- /opt/lcg/etc/NSCONFIG
- /opt/lcg/etc/DPMCONFIG
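As an illustration, both files normally contain a single line with the database connection string; since they hold the database password in clear text they should not be world-readable. The account, password, host and database names below are placeholders only, and the exact format (with or without a trailing database name) may depend on the DPM version:
  # /opt/lcg/etc/NSCONFIG  (DPNS database)
  dpmmgr/CHANGE_ME@mysql-host.example.org/cns_db
  # /opt/lcg/etc/DPMCONFIG (DPM database)
  dpmmgr/CHANGE_ME@mysql-host.example.org/dpm_db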
Configuration:
- /etc/sysconfig/dpnsdaemon
- /etc/sysconfig/rfiod
- /etc/sysconfig/srmv1
- /etc/sysconfig/srmv2.2
- /etc/sysconfig/srmv2
- /etc/sysconfig/dpm-gsiftp
- /etc/sysconfig/dpm
- /etc/sysconfig/dpm-xrd
- /opt/lcg/etc/lcgdm-mapfile
- /etc/shift.conf
Xrootd:
WebAccess:
- Run the configuration script, depending on whether you have a single combined head/disk node or separate head and disk nodes:
  o /opt/lcg/etc/dpm/https/conf/dpm-https-conf.sh --type sor
  o /opt/lcg/etc/dpm/https/conf/dpm-https-conf.sh --type head-node
  o /opt/lcg/etc/dpm/https/conf/dpm-https-conf.sh --type disk-node
- Whenever you add or remove disks in a DPM pool, you have to run on the disk node:
  o /opt/lcg/etc/dpm/https/conf/dpm-https-conf.sh --pools
Logfile locations (and management) and other useful audit information
By default, the log files are :
- /var/log/dpns/log
- /var/log/dpm/log
- /var/log/srmv1/log
- /var/log/srmv2/log
- /var/log/rfiod/log
- /var/log/dpm-gsiftp/gridftp.log
- /var/log/dpm-gsiftp/dpm-gsiftp.log
- /var/log/xroot/
- /var/log/dpm-httpd/access
- /var/log/dpm-httpd/errors
- /var/log/dpm-httpd/cgilog (special log file written by the redirection CGI script; errors are also logged to syslog)
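For day-to-day log management, a quick way to look for problems is to scan the most recent entries of the head node logs. A minimal sketch using the default locations listed above:
  # last entries of the DPM and DPNS logs
  tail -n 100 /var/log/dpm/log
  tail -n 100 /var/log/dpns/log
  # recent SRM errors
  grep -i error /var/log/srmv2/log | tail -n 20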
Open ports
- DPM server: port 5015/tcp must be open at least locally at your site (it can also be opened to incoming access from outside)
- DPNS server: port 5010/tcp must be open at least locally at your site (it can also be opened to incoming access from outside)
- SRM servers: ports 8443/tcp (SRMv1), 8444/tcp (SRMv2) and 8446/tcp (SRMv2.2) must be opened to the outside world (incoming access)
- RFIO server: port 5001/tcp and data ports 20000-25000/tcp must be open to the outside world (incoming access) if your site wants to allow RFIO access from outside
- Gridftp server: control port 2811/tcp and data ports 20000-25000/tcp (or any range specified by GLOBUS_TCP_PORT_RANGE) must be open to the outside world (incoming access)
- Xrootd: the manager xrootd listens by default on port 1094 and the disk server xrootd listens by default on port 1095. You have to allow incoming access to these ports from the expected client machines. All transfers run via port 1095, so there is no need for an open high port range.
- WebAccess:
  - Head node:
    o Virtual host on port 443: main entry point for web file access to a DPM ('https://<dpm-headnode>/dpm/cern.ch/filename'), with default redirection to HTTP transport
    o Virtual host on port 883: main entry point for web file access to a DPM with forced redirection to HTTPS transport (slower)
  - Disk server:
    o Virtual host on port 777: transport endpoint for HTTP web file access with redirector authorization
    o Virtual host on port 884: transport endpoint for HTTPS web file access with redirector authorization
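To verify from outside that the externally visible ports are actually reachable, a simple TCP connectivity check can be run from a UI. The host names below are placeholders:
  # SRMv2.2 endpoint on the head node
  nc -zv dpm-headnode.example.org 8446
  # GridFTP control channel on a disk server
  nc -zv disk01.example.org 2811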
Possible unit test of the service
The service can be tested from a UI, from a WN, and from the DPM node itself.
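A minimal functional test could look like the following; the host name, VO and path are examples only:
  # from a UI or WN, with a valid grid proxy:
  export DPNS_HOST=dpm-headnode.example.org
  dpns-ls -l /dpm/example.org/home/dteam
  # on the head node itself, check the pool configuration:
  dpm-qryconf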
Where is service state held (and can it be rebuilt)
In the filesystem and in the back-end database.
Cron jobs
None
Security information
- This section contains security-related information about the DPM service.
Access control Mechanism description (authentication & authorization)
DPM uses X509 and VOMS based authentication information for authorization. Once the clients are authenticated, the X509 certificate DN is used as a user name and the VOMS FQANs are used as group names. See more at DPM virtual users and groups. Apart from this, DPM's authorization follows POSIX filesystem authorization semantics.
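Since the namespace follows POSIX-like semantics, the usual permission and ACL queries apply. A small illustration with the standard DPNS client tools (the path is an example):
  # show and change POSIX permissions on a namespace entry
  dpns-ls -l /dpm/example.org/home/dteam/data
  dpns-chmod 750 /dpm/example.org/home/dteam/data
  # inspect the ACLs
  dpns-getacl /dpm/example.org/home/dteam/data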
How to block/ban a user
If a user does not use a VOMS certificate, then one can "ban" the user by mapping them to an invalid group via the gridmap-file:
- If it is necessary to ban a user, follow these steps (the same procedure as on a CE):
- Rebuild the grid-mapfile by executing:
- /opt/edg/sbin/edg-mkgridmap --output=/etc/grid-security/grid-mapfile --safe
- Then rebuild the lcgdm-mapfile by running:
- /opt/edg/libexec/edg-mkgridmap/edg-mkgridmap.pl --conf=/opt/lcg/etc/lcgdm-mkgridmap.conf --output=/opt/lcg/etc/lcgdm-mapfile --safe
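For illustration only, the end result of such a ban is a mapfile entry that maps the user's certificate DN to a group that does not exist; the DN and group name below are made up, and in practice the entry is produced via the edg-mkgridmap configuration rather than edited by hand:
  # example banning entry in /opt/lcg/etc/lcgdm-mapfile
  "/DC=org/DC=example/OU=Users/CN=Some User" banned_users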
On the other hand, most clients come with VOMS certificates, so banning this way would not work until BUG:43710 is resolved.
Network Usage
See above at #Open_ports
Firewall configuration
See above at #Open_ports
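As a sketch only, the externally reachable ports from the list above could be opened on a head node with iptables rules like the following (interfaces, source ranges and the existing firewall policy are site specific):
  iptables -A INPUT -p tcp --dport 8443 -j ACCEPT   # SRMv1
  iptables -A INPUT -p tcp --dport 8444 -j ACCEPT   # SRMv2
  iptables -A INPUT -p tcp --dport 8446 -j ACCEPT   # SRMv2.2
  iptables -A INPUT -p tcp --dport 5010 -j ACCEPT   # DPNS (site-local at least)
  iptables -A INPUT -p tcp --dport 5015 -j ACCEPT   # DPM (site-local at least)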
Security recommendations
Disable HTTP and Xrootd access if they are not needed. The YAIM options are:
DPM_HTTPS="no"
DPM_XROOTD="no"
Virtual Users
The users and groups are virtual for the storage, so they should not interfere with the real users of the system. All files are owned by 'dpmmgr' in the default configuration. According to the original design of DPM, this should enable one to install DPM disk nodes on Worker Nodes.
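A quick way to verify this on a disk server is to check the ownership of the pool filesystem contents; the filesystem path below is an example, use the path(s) reported by dpm-qryconf:
  # entries should be owned by the dpmmgr user and group
  id dpmmgr
  ls -l /dpmpool01/dteam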
Trusted Nodes
The role of the DPM nodes is special in the default configuration, because they are "trusted" by their IP address (host name), which means that services running on those nodes can impersonate users without delegated credentials. For example, the SRM front-end can interact with the DPM name service on behalf of the client without doing certificate based authentication. This allows the name service to perform the usual authorization checks while avoiding the extra cost of certificate based authentication.
In the default configuration the request's source IP address is the basis for the trust relation, so one should not run any other services on the DPM nodes that could allow an attacker to abuse this feature. If such a configuration is unavoidable, one can enable strong authentication among the services while disabling the IP based trust relationship.
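For illustration, this host based trust is normally declared in /etc/shift.conf (listed among the configuration files above). The entries below are a sketch with placeholder host names; the exact directives for your release are described in the DPM configuration documentation:
  DPM   TRUST dpm-headnode.example.org
  DPNS  TRUST dpm-headnode.example.org disk01.example.org
  RFIOD TRUST dpm-headnode.example.org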
Security incompatibilities
Do not install a UI or WN on any of the DPM nodes in the default deployment model.
List of externals (packages are NOT maintained by Red Hat or by gLite)
The xrootd daemon; see more at LCG.DpmXrootAccess
Other security relevant comments
None.
Utility scripts
GridPP DPM administration toolkit
Monitoring
GridPP DPM monitoring system
Location of reference documentation for users
See DPM Documentation
Location of reference documentation for administrators
See DPM Documentation
See Xrootd Documentation
See web access on DPM Documentation
--
RicardoRocha - 29-Oct-2010