---+ Instructions for Site Admins

This page describes the steps for deploying CMS Space Monitoring at the sites. In order to meet the [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompProjOffice][timelines]], the deployment was split into two phases:

| *Phase I* | Accumulate and store storage usage info locally at the sites |
| *Phase II* | Aggregate and publish data into the central database |

Phase I is now complete. The instructions were last updated on Feb 7th, 2017.

%TOC%

---++ Space Monitoring deployment at the site %COMPLETE5%

---+++ Step 1: Locate/produce a working tool to create storage-dumps

Storage-dump tools are storage-technology specific. We maintain a common [[https://github.com/dmwm/DMWMMON][repository]] for the CMS supported storage technologies. There you will find instructions and scripts developed by CMS/ATLAS site admins and/or references to the tools provided with the storage solutions, as well as sample storage-dumps.

If you use your own storage-dump tool, please follow the storage-dump formats described at [[https://twiki.cern.ch/twiki/bin/view/LCG/ConsistencyChecksSEsDumps#Format_of_SE_dumps][ConsistencyChecksSEsDumps#Format_of_SE_dumps]]. NOTE: the requirements differ from the dump used for the [[https://twiki.cern.ch/twiki/bin/view/CMS/StorageConsistencyCheck][PhEDEx Storage Consistency Check]] in the following ways:
   * The dump should contain all files on CMS storage, including data outside the official CMS datasets.
   * The dump must provide the file size and the full path as seen on the local storage (direct PFN).
   * Dump files in =txt= format must have a timestamp encoded in the file name; dumps in =xml= format must contain a =dump= tag with a =recorded= attribute. See the examples in [[https://github.com/dmwm/DMWMMON/tree/master/SiteInfoProviders][SiteInfoProviders]].

Please do not hesitate to contribute your tools and bug-fixes to the common repository. You can fork the repository and make a pull request to merge your branch, or you can ask Eric for write-access.

---+++ Step 2: Install and configure the client tool

---++++ Using spacemon-client installed on CVMFS

The spacemon-client releases are now automatically deployed on CVMFS. Assuming the cvmfs client is installed on your machine, you can run spacemon-client directly from the cvmfs cache, usually mounted as =/cvmfs=. No setup is necessary. For convenience you may link the Utilities directory of your preferred release into your working directory:
<verbatim>
cd ~/mywork
ln -s /cvmfs/cms.cern.ch/spacemon-client/slc6_amd64_gcc493/cms/spacemon-client/1.0.2/DMWMMON/SpaceMon/Utilities/ ./
./Utilities/spacemon -h
</verbatim>
To install the CVMFS client, you can use the [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/CernVMFS4cms][instructions on how to set up the CVMFS client on CMS worker nodes]] as a reference, skipping the job-configuration related steps.
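To see which spacemon-client releases are currently deployed, you can list the CVMFS tree directly. This is a sketch assuming the directory layout of the example above; the architecture string may differ on your machine:
<verbatim>
# list the spacemon-client versions available for this architecture
ls /cvmfs/cms.cern.ch/spacemon-client/slc6_amd64_gcc493/cms/spacemon-client/
</verbatim>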
---++++ Local installation as a CMS rpm

%TWISTY{mode="div" showlink="Show instructions " hidelink="Hide instructions" firststart="Hide" showimgright="%ICONURLPATH{toggleopen-small}%" hideimgright="%ICONURLPATH{toggleclose-small}%"}%

   1 Create a directory for the software installation:<verbatim>
mkdir sw
export sw=$PWD/sw
</verbatim>
   1 Choose the architecture and the repository (the bootstrap in the next step is only needed once per architecture):<verbatim>
myarch=slc6_amd64_gcc493
repo=comp
</verbatim>
   1 Configure the CMS software area and search for the available spacemon-client releases:<verbatim>
wget -O $sw/bootstrap.sh http://cmsrep.cern.ch/cmssw/repos/bootstrap.sh
sh -x $sw/bootstrap.sh setup -path $sw -arch $myarch -repository $repo 2>&1 | tee $sw/bootstrap_$myarch.log
$sw/common/cmspkg -a $myarch update
$sw/common/cmspkg -a $myarch search spacemon-client
</verbatim>
   1 Install the desired version:<verbatim>
version=1.0.2
$sw/common/cmspkg -a $myarch install cms+spacemon-client+$version
</verbatim>
   1 To test, start a new session with a clean environment:<verbatim>
myarch=slc6_amd64_gcc493
sw=`pwd`/sw
source $sw/$myarch/cms/spacemon-client/1.0.2/etc/profile.d/init.sh
grid-proxy-init
spacemon -h
</verbatim>
%ENDTWISTY%

---++++ Installation from the github repository

This method is preferred for testing and development.
<pre>
git clone https://github.com/dmwm/DMWMMON.git
cd DMWMMON/SpaceMon
git checkout spacemon-client_1_0_2
Utilities/spacemon -h
</pre>

---++++ Configure local aggregation parameters (optional)

Spacemon's configuration feature allows you to specify the depth at which directories in the [[https://twiki.cern.ch/twiki/bin/view/CMS/DMWMPG_Namespace][CMS DMWMPG Namespace]] are monitored. To view the set of globally defined configuration rules, try:
<verbatim>
spacemon --defaults
</verbatim>
You can override these rules or add your own in a local configuration file, defining a =%USERCFG= perl hash with rules in terms of PFNs, as shown in the example.

%TWISTY{mode="div" showlink="Show example:" hidelink="Hide example" firststart="hide" showimgright="%ICONURLPATH{toggleopen-small}%" hideimgright="%ICONURLPATH{toggleclose-small}%"}%
<verbatim>
%USERCFG = (
    '/' => 3,
    '/localtests/' => -1,
    '/dcache/uscmsdisk/store/user/' => 3,
    '/dcache/uscmsdisk/store/' => 4,
);
</verbatim>
%ENDTWISTY%

The namespace rule values define how many directory levels under the specified path are monitored:

| *Depth value* | *Resulting behavior* |
| 0 | total size of the directory is monitored, the contents are concealed |
| 1 | the directory and its immediate sub-directories are monitored |
| 2 (or 3, 4, ...) | two or more levels of sub-directories are monitored |
| -1 (negative int) | all contents of the directory are excluded from the monitoring record |

Spacemon looks for the user's configuration in =~/.spacemonrc=; this location can be overridden with the =--config= option.

---+++ Step 3: Manually upload storage records to the central database

---++++ Enable authentication

Upload to the central monitoring service requires certificate based authentication:
   * Make sure you have the _site admin_ role for your site defined in [[https://cmsweb.cern.ch/sitedb/prod/sites][SiteDB]].
   * Make sure the perl-Crypt-SSLeay rpm package is installed on the node from which you upload; it provides the https support used for the upload.
   * An RFC 3280-compliant proxy with at least 1024-bit key strength is required.

To verify your authentication, use the =spacemon --check-auth= command. See =spacemon -h= for the authentication related options.
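As a quick end-to-end check of the authentication setup, a minimal sketch (assuming the standard voms-clients tools are installed; adjust the proxy options to your site's policy):
<verbatim>
# create an RFC-style VOMS proxy for the cms VO
voms-proxy-init -voms cms -rfc
# verify that the central monitoring service accepts your credentials
spacemon --check-auth
</verbatim>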
---++++ Upload your record

By default spacemon prints the generated monitoring record to standard output. To force the upload, add the =--upload= option. For example:
<verbatim>
spacemon --dump mystoragedump.1486494092.txt.tgz --node T2_MY_NODE --upload
</verbatim>

---++++ Update your entry in this table

Once the upload step is complete, please add an entry for your site in the table below.

%EDITTABLE{editbutton}%
| *Site Name* | *Date* | *Storage Technology* |
| T2_AT_Vienna | 2015-06-23 | DPM |
| T2_BE_IIHE | | |
| T2_BE_UCL | 2015-04-16 | POSIX |
| T2_BR_SPRACE | | |
| T2_BR_UERJ | 2015-06-29 | HDFS |
| T2_CH_CERN | 2015-06-19 | !EOS |
| T2_CH_CSCS | 2014-06-05 | dCache |
| T2_CN_Beijing | 2015-06-01 | dCache |
| T2_DE_DESY | 2015-04-20 | dCache |
| T2_DE_RWTH | 2017-03-18 | dCache |
| T2_EE_Estonia | | |
| T2_ES_CIEMAT | 2014-05-12 | dCache |
| T2_ES_IFCA | 2015-05-28 | POSIX (GPFS) |
| T2_FI_HIP | 2015-04-09 | dCache |
| T2_FR_CCIN2P3 | 2014-10-23 | dCache |
| T2_FR_GRIF_IRFU | 2014-08-01 | DPM |
| T2_FR_GRIF_LLR | 2014-05-28 | DPM |
| T2_FR_IPHC | 2014-10-16 | DPM |
| T2_GR_Ioannina | | |
| T2_HU_Budapest | 2014-05-14 | DPM |
| T2_IN_TIFR | 2017-03-29 | DPM |
| T2_IT_Bari | | |
| T2_IT_Legnaro | 2015-06-01 | dCache |
| T2_IT_Pisa | 2017-02-24 | POSIX (GPFS) |
| T2_IT_Rome | 2014-05-20 | dCache |
| T2_KR_KNU | 2014-05-20 | dCache |
| T2_MY_UPM_BIRUNI | | |
| T2_PK_NCP | 2015-08-13 | DPM |
| T2_PL_Warsaw | | |
| T2_PL_Swierk | 2015-05-14 | DPM |
| T2_PT_NCG_Lisbon | 2015-09-19 | POSIX |
| T2_RU_IHEP | | |
| T2_RU_INR | 2015-07-07 | DPM |
| T2_RU_ITEP | | |
| T2_RU_JINR | | |
| T2_RU_PNPI | 2016-03-26 | DPM |
| T2_RU_RRC_KI | | |
| T2_RU_SINP | | |
| T2_TH_CUNSTDA | | |
| T2_TR_METU | | |
| T2_TW_Taiwan | | |
| T2_UA_KIPT | 2016-04-11 | DPM |
| T2_UK_London_Brunel | 2015-06-06 | DPM |
| T2_UK_London_IC | | |
| T2_UK_SGrid_Bristol | 2015-04-24 | POSIX (GPFS/HDFS) |
| T2_UK_SGrid_RALPP | | |
| T2_US_Caltech | 2014-06-03 | HDFS |
| T2_US_Florida | 2014-05-02 | Lustre |
| T2_US_MIT | 2015-05-05 | HDFS |
| T2_US_Nebraska | 2015-04-20 | HDFS |
| T2_US_Purdue | 2015-04-03 | HDFS |
| T2_US_UCSD | 2014-11-03 | HDFS |
| T2_US_Vanderbilt | 2015-04-27 | LStore |
| T2_US_Wisconsin | 2014-11-25 | HDFS |

---+++ Step 4: Produce storage-dumps and upload records routinely

Sites are asked to upload storage usage records once per week. Usually this involves setting up one cron job to produce the storage dumps and another cron job to run the spacemon command; a sketch crontab is shown after the FAQ entries below. The second cron job needs access to the storage dump file and to a valid proxy file. We recommend using the voms proxy certificate maintained for the !PhEDEx data transfers; please see the [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhedexAdminDocsInstallation?redirectedfrom=CMS.PhedexAdminDocsInstallation#Certificate_Management][certificate management details]].

---++ FAQ

Frequently asked questions by the site admins:

---+++ Q: Which sites are required to deploy CMS space monitoring?

*Answer:* All Tier-1 and Tier-2 sites should report space usage information for each PhEDEx endpoint node, except nodes of the MSS and Buffer types.

---+++ Q: How often do the sites need to report their CMS storage space usage?

*Answer:* Reports are to be produced and uploaded weekly. In case of problems with the upload, e.g. if authentication expires, a site can keep a local copy of the storage dump and upload it later. The dump file name (or the =recorded= tag in an xml dump) must contain the timestamp of when the storage dump was collected.
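The weekly routine from Step 4 could look like this in a crontab. This is a sketch only: the dump-producing script and all paths are hypothetical and site-specific, and =T2_MY_NODE= is the placeholder node name used above.
<verbatim>
# Produce the storage dump every Sunday at 02:00 (hypothetical site-specific tool).
# The epoch timestamp in the file name is required by the dump format rules of Step 1.
0 2 * * 0  /opt/site/bin/make_storage_dump.sh > /var/spool/spacemon/dump.$(date +\%s).txt
# Upload the newest dump two hours later, using the proxy maintained for the PhEDEx transfers.
0 4 * * 0  X509_USER_PROXY=/path/to/proxy spacemon --dump $(ls -t /var/spool/spacemon/dump.*.txt | head -1) --node T2_MY_NODE --upload
</verbatim>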
---+++ Q: What are the prerequisites for the authorized upload?

*Answer:* The upload command requires a valid certificate with a DN registered in [[https://cmsweb.cern.ch/sitedb/prod/sites][CMS SiteDB]] to a person that has the _site admin_ role for the site.

---+++ Q: How to check if the upload was successful?

*Answer:* The dates of the most recent site reports are periodically synchronized with the *Space Check* metric in the [[http://dashb-ssb.cern.ch/dashboard/request.py/siteview#currentView=test&highlight=true][CMS dashboard]]. To trigger a real-time update, click on the date in the 'Space Check' metric column next to the respective site name. Alternatively, use the [[https://cmsweb.cern.ch/dmwmmon/doc][DMWMMON data service APIs]] to retrieve your records; a sketch query is shown at the bottom of this page.

---+++ Q: How to report problems and get help?

*Answer:* Problems and questions related to space monitoring deployment can be sent to hn-cms-comp-ops@cern.ch. In case of problems, please open a [[https://github.com/dmwm/DMWMMON/issues][DMWMMON github issue]]. We may ask you to provide your storage-dump so that we can validate and tune the tools with it, so do not delete it yet.

---+++ Q: How long until CMS asks for a new way to monitor site usage?

-- Main.NataliaRatnikova - 14 Feb 2014
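The data service query mentioned in the FAQ above could look like the following. The =storageusage= API name is an assumption based on the DMWMMON data service documentation; check it against https://cmsweb.cern.ch/dmwmmon/doc before relying on it.
<verbatim>
# retrieve the storage usage records uploaded for a node (T2_MY_NODE is a placeholder)
curl --cert $X509_USER_PROXY --key $X509_USER_PROXY --capath /etc/grid-security/certificates \
  "https://cmsweb.cern.ch/dmwmmon/datasvc/json/storageusage?node=T2_MY_NODE"
</verbatim>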