Difference: CernLbdDiskSpace (1 vs. 32)

Revision 32 (2012-01-26) - ThomasRuf

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 32 to 33
 
    • All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space softlinked from their home directory.
    • Your Afs space is for code development and for storing documents, ntuples, and grid jobs that you want backed up but that are not massive files.
    • Run fs listquota to get the quota/usage; run it separately in each scratch space
Changed:
<
<
    • To obtain this increase contact RobLambert, who will forward your request
>
>
    • To obtain this increase contact Thomas Ruf, who will forward your request
 
  • Castor Space:
    • All CERN users have a castor home directory where you can store almost anything you like; put things here if they cannot fit in your Afs dir and need to be backed up for a long time.
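A minimal sketch of the quota checks mentioned above, using the script path given further down this page (scratch0 is only an example name for one of the ~/scratch* links in your home directory):

# Afs quota and usage, run separately in your home directory and in each scratch space
fs listquota ~
fs listquota ~/scratch0
# your castor usage, including the grid and lhcbt3 elements (can be slow)
/afs/cern.ch/project/lbcern/scripts/castorQuota.py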
Line: 65 to 66
 
    • To see the usage call fs listquota on each volume
    • The access list is maintained as a subgroup of z5, z5:cernlbd; to see if you are there call pts mem z5:cernlbd
    • To see the usage and check for problems call: /afs/cern.ch/project/lbcern/scripts/check1TB.py
Changed:
<
<
    • To be added to the access list contact RobLambert
    • To change the access list of directories you have created, you need to be given admin rights on your own directories; contact RobLambert for that
>
>
    • To be added to the access list contact Thomas Ruf
    • To change the access list of directories you have created, you need to be given admin rights on your own directories; contact Thomas Ruf for that
 
  • t3 Castor space:
    • We have a group disk pool on Castor. Like the grid resource it is always on disk; however, it does not count towards your Grid quota.
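A minimal sketch of the first-time setup for the 1TB Afs group scratch described in the bullets above (vol3 only illustrates the vol<X> numbering; pick a volume with free space):

# check you are on the access list
pts mem z5:cernlbd
# make a directory named exactly after your afs username
mkdir /afs/cern.ch/project/lbcern/vol3/$USER
# quota and usage of that 100GB volume
fs listquota /afs/cern.ch/project/lbcern/vol3
# overall usage of all volumes, and a check for problems
/afs/cern.ch/project/lbcern/scripts/check1TB.py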
Line: 84 to 85
 

-Access:

Changed:
<
<
The access list of the t3 castor is maintained by Joel Closier. If you find yourself without access, you should get in touch with RobLambert or ThomasRuf. To see the access list call stager_listprivileges -S lhcbt3. All lbd members should be granted access.
>
>
The access list of the t3 castor is maintained by Joel Closier. If you find yourself without access, you should get in touch with ThomasRuf. To see the access list call stager_listprivileges -S lhcbt3. All lbd members should be granted access.
 

-How to use:

Line: 106 to 105
 
| lhcbt3_mv.py | Move files between stagers from a given directory or list | /afs/cern.ch/project/lbcern/scripts/lhcbt3_mv.py <target> <svc1> [<svc2>] |
| stager_listprivileges | To see the access list | stager_listprivileges -S lhcbt3 |
| lhcbt3_cp.py | Copy new file, or a list of new files, to lhcbt3 (1) | /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py <source(s)> <destination> |
Changed:
<
<
| STAGE_SVCCLASS | Set lhcbt3 as your default staging pool, unsafe (2) | export STAGE_SVCCLASS="lhcbt3" |
>
>
| STAGE_SVCCLASS | Set lhcbt3 as your default staging pool, unsafe (2) | export STAGE_SVCCLASS="lhcbt3" |
 
| rfcp | Copy new file to lhcbt3 | rfcp <source> <destination> [after setting STAGE_SVCCLASS] |
| castorQuota.py | Disk usage from you of all castor elements, including lhcbt3 (can be slow) | /afs/cern.ch/project/lbcern/scripts/castorQuota.py |
| check30TB.py | Disk usage of all users of lhcbt3 castor element (cached from the night before) | /afs/cern.ch/project/lbcern/scripts/check30TB.py |
  1. lhcbt3_cp.py avoids the need to set STAGE_SVCCLASS because it spawns a subshell
Changed:
<
<
  2. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could be staged into the non-standard service class, causing bookkeeping problems
>
>
  2. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could be staged into the non-standard service class, causing bookkeeping problems
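Putting the table above together, a typical session might look like the sketch below (the castor path and file names are placeholders). The last line uses the standard shell idiom of setting STAGE_SVCCLASS for a single command only, which avoids the bookkeeping problem described in footnote 2:

# is the file already in the lhcbt3 pool?
stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname>/myfile.root
# stage an existing castor file into lhcbt3
stager_get -M /castor/cern.ch/user/<u>/<uname>/myfile.root -S lhcbt3 -U $USER
# copy a brand new file in, without touching STAGE_SVCCLASS
/afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py myfile.root /castor/cern.ch/user/<u>/<uname>/myfile.root
# overall usage of the pool
stager_qry -S lhcbt3 -sH
# alternative to exporting STAGE_SVCCLASS: set it for one rfcp only
STAGE_SVCCLASS="lhcbt3" rfcp myfile.root /castor/cern.ch/user/<u>/<uname>/myfile.root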
 

-Use in Ganga

Line: 121 to 120
[LHCb]
cp_cmd = /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py
Changed:
<
<
  2. If you want to access files on, stage files to, and write files to lhcbt3, if and only if you are running ganga on the LSF backend, then you can edit this line in your .gangarc:
>
>
  2. If you want to access files on, stage files to, and write files to lhcbt3, if and only if you are running ganga on the LSF backend, then you can edit this line in your .gangarc:
[LSF]
preexecute = import os
  os.environ['STAGE_SVCCLASS']='lhcbt3'
Line: 134 to 132
  If you are using a significant fraction of the space and somebody else needs some, you will be asked to clean up.
Changed:
<
<
Removal in this respect is done sporadically or on request by RobLambert at the moment, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
>
>
Removal in this respect is done sporadically or on request by Thomas Ruf at the moment, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
 

-Other service classes

Line: 218 to 215
 

Can I share my afs disk space with other people?

Changed:
<
<
Yes, this should be possible. You should have admin rights on your own directory; check with fs listacl. If not, contact RobLambert to fix. Then you should be able to modify your directories to be readable or writable by others, but note that if you make them writable by others, anything they write will count against your usage.
>
>
Yes, this should be possible. You should have admin rights on your own directory; check with fs listacl. If not, contact Thomas Ruf to fix. Then you should be able to modify your directories to be readable or writable by others, but note that if you make them writable by others, anything they write will count against your usage.
  To modify the access list you will need to do fs setacl <directory> <user or group> <setting>, e.g. fs setacl mydirectory myfriendbob read
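A minimal sketch of sharing a directory, combining the commands above with the recursive trick given lower down this page (mydirectory and the z5:cernlbd group are just examples; any user or pts group works):

# see who currently has access, and confirm you have admin rights
fs listacl mydirectory
# give the whole lbd group read access
fs setacl mydirectory z5:cernlbd read
# new permissions only apply to new directories, so propagate them to existing subdirectories
find mydirectory -type d -exec fs setacl {} z5:cernlbd read \;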
Line: 234 to 231
  See CernLbdDiskManagement.
Changed:
<
<

>
>

  -- RobLambert - 11-Nov-2010

Revision 31 (2011-08-18) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 220 to 220
  Yes, this should be possible. You should have admin rights on your own directory, call fs listacl. If not, contact RobLambert to fix. Then you should be able to modify your directories to be readable or writable by others, but note that if you make them writable by others, anything they write will count against your usage.
Changed:
<
<
To modify the access list you will need to do fs setacl <directory> <user or group> <setting>
>
>
To modify the access list you will need to do fs setacl <directory> <user or group> <setting>, e.g. fs setacl mydirectory myfriendbob read
  Once you have modified a top level directory, you will need to propagate it to all sub directories where you want that access. To do that a simple way is cp -r since new directories use new permissions, but if that is not possible you will have to recursively change all the permissions for example:

Revision 30 (2011-08-16) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 222 to 222
  To modify the access list you will need to do fs setacl <directory> <user or group> <setting>
Changed:
<
<
Once you have modified a top level directory, you will need to propagate it to all sub directories where you want that access. To do that a simple way is cp -r since new directories use new permissions, but if that is not possible you will have to recursively change all the permissions.
>
>
Once you have modified a top level directory, you will need to propagate it to all sub directories where you want that access. To do that a simple way is cp -r since new directories use new permissions, but if that is not possible you will have to recursively change all the permissions for example:

find <mytopdir> -type d -exec fs sa {} cern:z5 read \;
  For more on acl see here: http://docs.openafs.org/Reference/1/fs_setacl.html

Revision 29 (2011-06-08) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 138 to 138
 

-Other service classes

Changed:
<
<
  • lhcbdata, self-explanatory
>
>
  • lhcbdisk, self-explanatory
  • lhcbtape, self-explanatory
 
  • lhcbuser, disk storage of user-generated grid files

Revision 28 (2011-05-31) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 60 to 60
 
    • the current size is 1TB, divided into 10 volumes of 100GB each
    • Keep your items inside a directory with your name, anything outside a named directory will be deleted.
    • mkdir /afs/cern.ch/project/lbcern/vol<X>/<username>
Added:
>
>
    • Just your username, the same as you have it on afs, with no embellishments or addenda please
 
    • Use it to collect large sets of outputs that you can regenerate quickly, and collections of small files from many different grid jobs, without the need to continuously clean your home directory.
    • to see the usage call fs listquota on each volume
    • The access list is maintained as a subgroup of z5, z5:cernlbd, to see if you are there call pts mem z5:cernlbd
Line: 110 to 111
 
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) /afs/cern.ch/project/lbcern/scripts/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element (cached from the night before) /afs/cern.ch/project/lbcern/scripts/check30TB.py
  1. lhcbt3_cp.py avoids the need to set STAGE_SVCCLASS because it spawns a subshell
Changed:
<
<
  1. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could go there, causing bookkeeping problems
>
>
  1. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could be staged into the non-standard service class, causing bookkeeping problems
 

-Use in Ganga

Revision 27 (2011-05-30) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 220 to 220
  To modify the access list you will need to do fs setacl <directory> <user or group> <setting>
Added:
>
>
Once you have modified a top level directory, you will need to propagate it to all sub directories where you want that access. To do that a simple way is cp -r since new directories use new permissions, but if that is not possible you will have to recursively change all the permissions.
 For more on acl see here: http://docs.openafs.org/Reference/1/fs_setacl.html

Management Instructions:

Revision 26 (2011-05-30) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 205 to 205
  If you stage out of lhcbuser the files will probably just be staged back
Changed:
<
<
  • the LFN to PFN conversion done by ganga and bookkeeping etc includes the service class. This will be lhcbdata or lhcbuser
>
>
  • the LFN to PFN conversion done by ganga and bookkeeping etc includes the service class. This will be lhcbdata or lhcbuser or lhcbdisk
 
  • that means if you use the catalog conversion you will then create a staging request into the old service, and thus duplicate the file
  • To avoid this, do the LFN to PFN conversion yourself, either by starting with PFNs or using the guessPFN function in ganga utils
  • If anyone else uses these files, they would also have to do the same trick, otherwise they will double your footprint.
Line: 214 to 214
  Yes. Stage into one service class, stage out of the other service class. Use lhcbt3_mv for that. But be aware the files might just come back (see above).
Added:
>
>

Can I share my afs disk space with other people?

Yes, this should be possible. You should have admin rights on your own directory, call fs listacl. If not, contact RobLambert to fix. Then you should be able to modify your directories to be readable or writable by others, but note that if you make them writable by others, anything they write will count against your usage.

To modify the access list you will need to do fs setacl <directory> <user or group> <setting>

For more on acl see here: http://docs.openafs.org/Reference/1/fs_setacl.html

 

Management Instructions:

See CernLbdDiskManagement.

Revision 25 (2011-03-08) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 102 to 102
 
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
lhcbt3_rm.py Remove all files from a given user, directory or list /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <target>
Added:
>
>
lhcbt3_mv.py Move files between stagers from a given directory or list /afs/cern.ch/project/lbcern/scripts/lhcbt3_mv.py <target> <svc1> [<svc2>]
 
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
lhcbt3_cp.py Copy new file, or a list of new files to lhcbt31 /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py <source(s)> <destination>
STAGE_SVCCLASS Set lhcbt3 as your default staging pool unsafe2 export STAGE_SVCCLASS="lhcbt3"
Line: 115 to 116
  Well, you could set the variable STAGE_SVCCLASS in your bashrc, but that would be problematic, since you would then stage into lhcbt3 any file which you touched on castor, and it's very difficult to book-keep that.
Changed:
<
<
If you only want to write your files to lhcbt3, instead of read files from there or stage other random files to there, then edit in your .gangarc:
>
>
  1. If you only want to write your files to lhcbt3, instead of read files from there or stage other random files to there, then edit in your .gangarc:
 [LHCb] cp_cmd = /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py
Changed:
<
<
If you want to access files on, stage files to and write files to lhcbt3, if and only if you are running ganga on the LSF backend, then you can edit the line in your .gangarc:
>
>
  2. If you want to access files on, stage files to and write files to lhcbt3, if and only if you are running ganga on the LSF backend, then you can edit the line in your .gangarc:
 [LSF] preexecute = import os os.environ['STAGE_SVCCLASS']='lhcbt3'
Added:
>
>
    • You can also edit the config inside ganga dynamically to do this.
  3. Use PFNs which end in ?svcClass=<svcclass>, ganga utils' guessPFN function can add this for you.
 
Deleted:
<
<
You can also edit the config inside ganga dynamically to do this.
 

-Cleanup

Line: 138 to 135
  Removal in this respect is done sporadically or on request by RobLambert at the moment, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
Added:
>
>

-Other service classes

  • lhcbdata, self-explanatory
  • lhcbuser, disk storage of user-generated grid files

 

-FAQ

Does staging into lhcbt3 remove the staged copy from elsewhere?

Line: 193 to 196
  There is no real difference between CERN-USER and lhcbt3, they are both permanently-staged CERN castor disk-pools
Changed:
<
<
Above I describe how to send files from LSF or Local backends directly to lhcbt3 using Ganga Use in Ganga.

With the Dirac backend, your jobs could run anywhere in the world, and the output will usually be uploaded to the local grid storage element of the site at which they run.

You can either replicate them to CERN-USER or redirect the output there, then the files will be at CERN. (see Grid Resources)

>
>
  • Above I describe how to send files from LSF or Local backends directly to lhcbt3 using Ganga Use in Ganga.
  • With the Dirac backend, your jobs could run anywhere in the world, and the output will usually be uploaded to the local grid storage element of the site at which they run.
  • You can either replicate them to CERN-USER or redirect the output there, then the files will be at CERN. (see Grid Resources)
  • There is only one CERN castor name server, so there is no real distinction between files on castor apart from where they are staged.
  • Once they are on CERN castor you can stage them into lhcbt3 diskpool using the above commands, but both the CERN-USER and lhcbt3 are permanently staged to disk, so this will just double your file footprint.
  • You could always stager_rm them from the other disk pools, but probably you are fine just leaving them where they are.

If you stage out of lhcbuser the files will probably just be staged back

  • the LFN to PFN conversion done by ganga and bookkeeping etc includes the service class. This will be lhcbdata or lhcbuser
  • that means if you use the catalog conversion you will then create a staging request into the old service, and thus duplicate the file
  • To avoid this, do the LFN to PFN conversion yourself, either by starting with PFNs or using the guessPFN function in ganga utils
  • If anyone else uses these files, they would also have to do the same trick, otherwise they will double your footprint.
 
Changed:
<
<
There is only one CERN castor name server, so there is no real distinction between files on castor apart from where they are staged.
>
>

Can I swap files between service classes?

 
Changed:
<
<
Once they are on CERN castor you can stage them into lhcbt3 diskpool using the above commands, but both the CERN-USER and lhcbt3 are permanently staged to disk, so this will just double your file footprint. You could always stager_rm them from the other disk pools, but probably you are fine just leaving them where they are.
>
>
Yes. Stage into one service class, stage out of the other service class. Use lhcbt3_mv for that. But be aware the files might just come back (see above).
 

Management Instructions:

Revision 24 (2011-02-12) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 91 to 91
  lhcbt3 is a Castor disk pool, to move things there you need a different service class. Scripts to help you out with management are kept in /afs/cern.ch/project/lbcern/scripts/
Changed:
<
<
To add large sets of files from other people, and/or files from central productions, it's best to use the lhcbt3_stage.py script which (soon) will document this, account this against you, and so not be accidentally unstaged by the manager.
>
>
To add large sets of files from other people, and/or files from central productions, it's best to use the lhcbt3_stage.py script; the files will then be accounted against you. If you add files from another user this is very, very annoying, because then the only person who can remove them is Joel, by issuing a specific command from a specific machine.
 
Command Use Example
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
Line: 189 to 189
  and look for a line starting with "Backup". If you see a date there, the volume is backed up. If you see "Never", it is not.
Added:
>
>

Can I send files to castor lhcbt3 from DIRAC?

There is no real difference between CERN-USER and lhcbt3, they are both permanently-staged CERN castor disk-pools

Above I describe how to send files from LSF or Local backends directly to lhcbt3 using Ganga Use in Ganga.

With the Dirac backend, your jobs could run anywhere in the world, and the output will usually be uploaded to the local grid storage element of the site at which they run.

You can either replicate them to CERN-USER or redirect the output there, then the files will be at CERN. (see Grid Resources)

There is only one CERN castor name server, so there is no real distinction between files on castor apart from where they are staged.

Once they are on CERN castor you can stage them into lhcbt3 diskpool using the above commands, but both the CERN-USER and lhcbt3 are permanently staged to disk, so this will just double your file footprint. You could always stager_rm them from the other disk pools, but probably you are fine just leaving them where they are.

 

Management Instructions:

Revision 23 (2011-01-07) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 185 to 185
  However, this is a CERN only naming convention. To check a particular volume run
Changed:
<
<
vos exa <volume name>
>
>
/usr/sbin/vos exa <volume name>
  and look for a line starting with "Backup". If you see a date there, the volume is backed up. If you see "Never", it is not.
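As a concrete sketch, assuming you want to check the volume behind your own home directory (fs listquota prints the volume name in its first column):

# find the volume name of your home directory
fs listquota ~
# examine it and look for the "Backup" line ("Never" means not backed up)
/usr/sbin/vos exa <volume name>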

Revision 22 (2011-01-07) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 119 to 119
 
[LHCb]
Changed:
<
<
cp_cmd = /afs/.../lhcbt3_cp.py
>
>
cp_cmd = /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py
 
Changed:
<
<
If you want to access files on, stage files to and write files to lhcbt3, but only while you are running on LSF, then you can edit the line in your .gangarc:
>
>
If you want to access files on, stage files to and write files to lhcbt3, if and only if you are running ganga on the LSF backend, then you can edit the line in your .gangarc:
 
[LSF]
Line: 138 to 138
  Removal in this respect is done sporadically or on request by RobLambert at the moment, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
Changed:
<
<

-FAQ

>
>

-FAQ

 
Changed:
<
<

Does staging into lhcbt3 remove the staged copy from elsewhere?

>
>

Does staging into lhcbt3 remove the staged copy from elsewhere?

  No.
Line: 154 to 154
 /castor/...../stdout STAGED
Changed:
<
<

Can a file staged to lhcbt3 be read without setting the STAGE_SVCCLASS?

>
>

Can a file staged to lhcbt3 be read without setting the STAGE_SVCCLASS?

  Touching the file without setting the environment variable will cause the file to stage itself onto the default disk pools, copying from lhcbt3.

This is much quicker than staging from tape, but it is probably not what you want to do.

Changed:
<
<

Does unstaging the file delete it forever?

>
>

Does unstaging the file delete it forever?

  No, technically the disk pool is still backed up to tape, but according to the experts:
Line: 169 to 169
  I have no idea what that means...
Changed:
<
<

I unstaged the file, but it still appears if I nsls it!

>
>

I unstaged the file, but it still appears if I nsls it!

  Technically the disk pool is still backed up to tape, and the file is still registered in the castor name server (NS) which is why nsls shows it.

To remove it completely you need to rfrm, just like every other file in castor.
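A short sketch of the difference, using placeholder castor paths: stager_rm only drops the staged disk copy, while rfrm removes the file from the name server itself.

# drop only the lhcbt3 disk copy
stager_rm -M /castor/cern.ch/user/<u>/<uname>/oldfile.root -S lhcbt3
# the file is still listed by the name server
nsls /castor/cern.ch/user/<u>/<uname>/oldfile.root
# remove it completely, like any other castor file
rfrm /castor/cern.ch/user/<u>/<uname>/oldfile.root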

Added:
>
>

How can I check if a volume is backed up or not?

In terms of backup, there are only two types of volumes: backed-up ones and non-backed-up ones.

Rule-of-thumb: volumes that have names starting with:

  • p., .u, .user: are backed up
  • q., s.: not backed up

However, this is a CERN only naming convention. To check a particular volume run

vos exa <volume name>

and look for a line starting with "Backup". If you see a date there, the volume is backed up. If you see "Never", it is not.

 

Management Instructions:

See CernLbdDiskManagement.

Revision 21 (2010-12-20) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 136 to 136
  If you are using a significant fraction of the space and somebody else needs some, you will be asked to clean up.
Changed:
<
<
In case there are randomly staged files from outside you user area, not in a blame file, then they may be removed.

Removal in this respect will probably be a cron job once per week, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.

>
>
Removal in this respect is done sporadically or on request by RobLambert at the moment, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
 

-FAQ

Revision 20 (2010-12-14) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 68 to 68
 
    • to change the access list of directories you have created, you need to be added to admin your own directories, contact RobLambert for that

  • t3 Castor space:
Changed:
<
<
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.
>
>
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, however it does not count towards your Grid quota.
 
    • It is technically tied to the lhcbt3
    • The current size is 30TB.
    • You can monitor the status here
Line: 79 to 79
 

-Description:

Changed:
<
<
The 30TB t3 castor disk pool is very special. It is a stager pool of disks for castor. Usually when you upload files to your castor working area, they are backed up on tape, and available on a staged disk for a short time. After which time the staged disk copy is deleted, and they need to be staged again. The lhcbt3 is a castor staging pool, files can be copied to there by being staged to there. In which case they will be backed up. Or, if they are created once and only on that space, they will not be backed up to tape.
>
>
The 30TB t3 castor disk pool is very special. It is a stager pool of disks for castor. Usually when you upload files to your castor working area, they are backed up on tape and available on a staged disk for a short time, after which the staged disk copy is deleted and they need to be staged again. The lhcbt3 is a castor staging pool; files are copied there by being staged there.
 

-Access:

Line: 99 to 99
 
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
Check usage of lhcbt3 stager_qry -S lhcbt3 -sH
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
Changed:
<
<
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
>
>
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
 
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
lhcbt3_rm.py Remove all files from a given user, directory or list /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <target>
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
Line: 107 to 107
 
STAGE_SVCCLASS Set lhcbt3 as your default staging pool unsafe2 export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) /afs/cern.ch/project/lbcern/scripts/castorQuota.py
Changed:
<
<
check30TB.py Disk usage of all users of lhcbt3 castor element /afs/cern.ch/project/lbcern/scripts/check30TB.py
>
>
check30TB.py Disk usage of all users of lhcbt3 castor element (cached from the night before) /afs/cern.ch/project/lbcern/scripts/check30TB.py
 
  1. lhcbt3_cp.py avoids the need to set STAGE_SVCCLASS because it spawns a subshell
  2. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could go there, causing bookkeeping problems
Line: 156 to 156
 /castor/...../stdout STAGED
Added:
>
>

Can a file staged to lhcbt3 be read without setting the STAGE_SVCCLASS?

Touching the file without setting the environment variable will cause the file to stage itself onto the default disk pools, copying from lhcbt3.

This is much quicker than staging from tape, but it is probably not what you want to do.

Does unstaging the file delete it forever?

No, technically the disk pool is still backed up to tape, but according to the experts:

There is tape backing (for tape fileclass files) for recovery purposes but not enough tape resources for an aggressive use of the disk as a cache to tape.

I have no idea what that means...

I unstaged the file, but it still appears if I nsls it!

Technically the disk pool is still backed up to tape, and the file is still registered in the castor name server (NS) which is why nsls shows it.

To remove it completely you need to rfrm, just like every other file in castor.

 

Management Instructions:

See CernLbdDiskManagement.

Revision 19 (2010-12-14) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 37 to 37
 
  • Castor Space:
    • All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.
    • After a while files will be stored on tape, and take some time to stage.
Changed:
<
<
    • To see your usage, run: python ~rlambert/public/LBD/castorQuota.py
>
>
    • To see your usage, run: /afs/cern.ch/project/lbcern/scripts/castorQuota.py
 

Grid Resources:

Line: 47 to 47
 
    • your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota
    • It is permanently staged to disk.
    • Use it to store large ntuples and DSTs that you got from the grid, which you want to share with others in the collaboration and which you don't want to disappear back to tape.
Changed:
<
<
    • To see your usage proportion, run: python ~rlambert/public/LBD/castorQuota.py
>
>
    • To see your usage proportion, run: /afs/cern.ch/project/lbcern/scripts/castorQuota.py
 

Group Resources:

Line: 63 to 63
 
    • Use it to collect large sets of outputs that you can regenerate quickly, and collections of small files from many different grid jobs, without the need to continuously clean your home directory.
    • to see the usage call fs listquota on each volume
    • The access list is maintained as a subgroup of z5, z5:cernlbd, to see if you are there call pts mem z5:cernlbd
Changed:
<
<
    • To see the usage and check for problems call: python  ~rlambert/public/LBD/check1TB.py
>
>
    • To see the usage and check for problems call: /afs/cern.ch/project/lbcern/scripts/check1TB.py
 
    • To be added to the access list contact RobLambert
    • To change the access list of directories you have created, you need to be given admin rights on your own directories; contact RobLambert for that
Line: 72 to 72
 
    • It is technically tied to the lhcbt3
    • The current size is 30TB.
    • You can monitor the status here
Added:
>
>
    • Once per night all files are dumped to a big list here
 
    • The commands for checking quota, access etc. are given below

More on the Castor disk pool:

Line: 88 to 89
 

-How to use:

Changed:
<
<
lhcbt3 is a Castor disk pool, to move things there you need a different service class.
>
>
lhcbt3 is a Castor disk pool, to move things there you need a different service class. Scripts to help you out with management are kept in /afs/cern.ch/project/lbcern/scripts/
  To add large sets of files from other people, and/or files from central productions, it's best to use the lhcbt3_stage.py script which (soon) will document this, account this against you, and so not be accidentally unstaged by the manager.
Line: 98 to 99
 
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
Check usage of lhcbt3 stager_qry -S lhcbt3 -sH
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
Changed:
<
<
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
>
>
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
 
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
Changed:
<
<
lhcbt3_rm.py Remove all files from a given user, directory or list python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <target>
>
>
lhcbt3_rm.py Remove all files from a given user, directory or list /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <target>
 
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
Changed:
<
<
lhcbt3_cp.py Copy new file, or a list of new files to lhcbt31 python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py <source(s)> <destination>
>
>
lhcbt3_cp.py Copy new file, or a list of new files to lhcbt31 /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py <source(s)> <destination>
 
STAGE_SVCCLASS Set lhcbt3 as your default staging pool unsafe2 export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
Changed:
<
<
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  /afs/cern.ch/project/lbcern/scripts/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element python  /afs/cern.ch/project/lbcern/scripts/check30TB.py
>
>
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) /afs/cern.ch/project/lbcern/scripts/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element /afs/cern.ch/project/lbcern/scripts/check30TB.py
 
  1. lhcbt3_cp.py avoids the need to set STAGE_SVCCLASS because it spawns a subshell
Changed:
<
<
  1. It's not a good idea to set this, because if you forget to unset it, then every file you touch could go there, causing bookkeeping problems
>
>
  1. It's not a great idea to set STAGE_SVCCLASS, because if you forget to unset it, then every file you touch could go there, causing bookkeeping problems

-Use in Ganga

Well, you could set the variable STAGE_SVCCLASS in your bashrc, but that would be problematic, since you would then stage into lhcbt3 any file which you touched on castor, and it's very difficult to book-keep that.

If you only want to write your files to lhcbt3, instead of read files from there or stage other random files to there, then edit in your .gangarc:

[LHCb]
cp_cmd = /afs/.../lhcbt3_cp.py

If you want to access files on, stage files to and write files to lhcbt3, but only while you are running on LSF, then you can edit the line in your .gangarc:

[LSF]
preexecute = import os
  os.environ['STAGE_SVCCLASS']='lhcbt3'

You can also edit the config inside ganga dynamically to do this.

 

-Cleanup

Line: 118 to 140
  Removal in this respect will probably be a cron job once per week, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.
Added:
>
>

-FAQ

Does staging into lhcbt3 remove the staged copy from elsewhere?

No.

$ stager_qry -M /castor/.../stdout
/castor/...../stdout STAGED
$ stager_get -M ...../stdout -S lhcbt3
$ stager_qry -M /castor/.../stdout
/castor/...../stdout STAGED
$ stager_qry -M /castor/.../stdout -S lhcbt3
/castor/...../stdout STAGED
 

Management Instructions:

See CernLbdDiskManagement.

Revision 18 (2010-12-13) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 98 to 98
 
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
Check usage of lhcbt3 stager_qry -S lhcbt3 -sH
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
Changed:
<
<
lhcbt3_stage.py Stage all files from a given directory or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <castor file or directory, or file containing list to add>
>
>
lhcbt3_stage.py Stage existing castor file(s) from a given file, directory, or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <source>
 
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
Changed:
<
<
lhcbt3_rm.py Remove all files from a given user, directory or list python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <directory or username or file with list to remove>
>
>
lhcbt3_rm.py Remove all files from a given user, directory or list python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <target>
 
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
Changed:
<
<
STAGE_SVCCLASS Set lhcbt3 as your default staging pool1 export STAGE_SVCCLASS="lhcbt3"
>
>
lhcbt3_cp.py Copy new file, or a list of new files to lhcbt31 python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_cp.py <source(s)> <destination>
STAGE_SVCCLASS Set lhcbt3 as your default staging pool unsafe2 export STAGE_SVCCLASS="lhcbt3"
 
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  /afs/cern.ch/project/lbcern/scripts/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element python  /afs/cern.ch/project/lbcern/scripts/check30TB.py
Changed:
<
<
  1. It's not a good idea to set this permanently in your bashrc, because then every file you touch could go there, causing bookkeeping problems
>
>
  1. lhcbt3_cp.py avoids the need to set STAGE_SVCCLASS because it spawns a subshell
  2. It's not a good idea to set this, because if you forget to unset it, then every file you touch could go there, causing bookkeeping problems
 

-Cleanup

Revision 17 (2010-12-13) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 71 to 71
 
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.
    • It is technically tied to the lhcbt3
    • The current size is 30TB.
Added:
>
>
    • You can monitor the status here
 
    • The commands for checking quota, access etc. are given below

More on the Castor disk pool:

Line: 95 to 96
 
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
List files from a directory stager_qry -S lhcbt3 -M <adirectory>
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
Added:
>
>
Check usage of lhcbt3 stager_qry -S lhcbt3 -sH
 
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
lhcbt3_stage.py Stage all files from a given directory or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <castor file or directory, or file containing list to add>
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3

Revision 16 (2010-12-13) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 96 to 96
 
List files from a directory stager_qry -S lhcbt3 -M <adirectory>
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
Changed:
<
<
lhcbt3_stage.py Stage all files from a given directory or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  ~rlambert/public/LBD/lhcbt3_stage.py <castor file or directory, or file containing list to add>
>
>
lhcbt3_stage.py Stage all files from a given directory or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_stage.py <castor file or directory, or file containing list to add>
 
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
Changed:
<
<
lhcbt3_rm.py Remove all files from a given user, directory or list python  ~rlambert/public/LBD/lhcbt3_rm.py <directory or username or file with list to remove>
>
>
lhcbt3_rm.py Remove all files from a given user, directory or list python  /afs/cern.ch/project/lbcern/scripts/lhcbt3_rm.py <directory or username or file with list to remove>
 
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
STAGE_SVCCLASS Set lhcbt3 as your default staging pool1 export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
Changed:
<
<
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  ~rlambert/public/LBD/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element python  ~rlambert/public/LBD/check30TB.py
>
>
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  /afs/cern.ch/project/lbcern/scripts/castorQuota.py
check30TB.py Disk usage of all users of lhcbt3 castor element python  /afs/cern.ch/project/lbcern/scripts/check30TB.py
 
  1. It's not a good idea to set this permanently in your bashrc, because then every file you touch could go there, causing bookkeeping problems

-Cleanup

Revision 15 (2010-12-13) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 89 to 89
  lhcbt3 is a Castor disk pool, to move things there you need a different service class.
Added:
>
>
To add large sets of files from other people, and/or files from central productions, it's best to use the lhcbt3_stage.py script which (soon) will document this, account this against you, and so not be accidentally unstaged by the manager.
 
Command Use Example
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
List files from a directory stager_qry -S lhcbt3 -M <adirectory>
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
Changed:
<
<
stager_get Stage existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
>
>
stager_get Stage single existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3 -U $USER
lhcbt3_stage.py Stage all files from a given directory or list. Makes a blame file to make sure this is accounted against you, and not accidentally unstaged. python  ~rlambert/public/LBD/lhcbt3_stage.py <castor file or directory, or file containing list to add>
stager_rm Remove single file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
lhcbt3_rm.py Remove all files from a given user, directory or list python  ~rlambert/public/LBD/lhcbt3_rm.py <directory or username or file with list to remove>
 
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
STAGE_SVCCLASS Set lhcbt3 as your default staging pool1 export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  ~rlambert/public/LBD/castorQuota.py
Changed:
<
<
check30TB.py Disk usage of lhcbt3 castor element python  ~rlambert/public/LBD/check30TB.py
lhcbt3_rm.py Remove all files from a given user or directory python  ~rlambert/public/LBD/lhcbt3_rm.py <directory or username or file with list to remove>
>
>
check30TB.py Disk usage of all users of lhcbt3 castor element python  ~rlambert/public/LBD/check30TB.py
 
  1. It's not a good idea to set this permanently in your bashrc, because then every file you touch could go there, causing bookkeeping problems
Added:
>
>

-Cleanup

If you are using a significant fraction of the space and somebody else needs some, you will be asked to clean up.

In case there are randomly staged files from outside you user area, not in a blame file, then they may be removed.

Removal in this respect will probably be a cron job once per week, only cleaning up if there is more than 2TB used this way, or if the files are more than 50% of the free space.

 

Management Instructions:

See CernLbdDiskManagement.

Revision 14 (2010-12-11) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 96 to 96
 
stager_get Stage existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
Changed:
<
<
STAGE_SVCCLASS Set lhcbt3 as your default staging pool export STAGE_SVCCLASS="lhcbt3"
>
>
STAGE_SVCCLASS Set lhcbt3 as your default staging pool1 export STAGE_SVCCLASS="lhcbt3"
 
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  ~rlambert/public/LBD/castorQuota.py
check30TB.py Disk usage of lhcbt3 castor element python  ~rlambert/public/LBD/check30TB.py
lhcbt3_rm.py Remove all files from a given user or directory python  ~rlambert/public/LBD/lhcbt3_rm.py <directory or username or file with list to remove>
Changed:
<
<
>
>
  1. It's not a good idea to set this permanently in your bashrc, because then every file you touch could go there, causing bookkeeping problems
 

Management Instructions:

Revision 13 (2010-12-11) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Added:
>
>
This Twiki corresponds to the talk given here
 
Line: 34 to 37
 
  • Castor Space:
    • All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.
    • After a while files will be stored on tape, and take some time to stage.
Added:
>
>
    • To see your usage, run: python ~rlambert/public/LBD/castorQuota.py
 

Grid Resources:

Line: 43 to 47
 
    • your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota
    • It is permanately staged to disk.
    • Use it to store large ntuples and DSTs that you got from the grid, which you want to share with others in the collaboration and which you don't want to dissapear back to tape.
Added:
>
>
    • To see your usage proportion, run: python ~rlambert/public/LBD/castorQuota.py
 

Group Resources:

Line: 66 to 71
 
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.
    • It is technically tied to the lhcbt3
    • The current size is 30TB.
Changed:
<
<
    • This is to be used to share files locally, if there is some reason the files cannot sit on the Grid, and they need to be permanently staged.
    • See the total usage here: lhcbt3-castor-monitoring, or use stager_qry -S lhcbt3 -s
    • To see your usage proportion, run: python ~rlambert/public/LBD/castorQuota.py
    • To for each individual file you can tell if it is on lhcbt3 or another system by calling stager_qry -S lhcbt3 -M /castor/cern.ch/user/...
    • To see the access list call stager_listprivileges -S lhcbt3
>
>
    • The commands for checking quota, access etc. are given below
 

More on the Castor disk pool:

Line: 90 to 91
 
Command Use Example
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
Added:
>
>
List files from a directory stager_qry -S lhcbt3 -M <adirectory>
List all your files stager_qry -S lhcbt3 -M /castor/cern.ch/user/<u>/<uname> -M /castor/cern.ch/grid/lhcb/user/<u>/<uname>
 
stager_get Stage existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
STAGE_SVCCLASS Set lhcbt3 as your default staging pool export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
Changed:
<
<
castorQuota.py Disk usage from you of lhcbt3, or of a certain directory python  ~rlambert/public/LBD/castorQuota.py
>
>
castorQuota.py Disk usage from you of all castor elements, including lhcbt3 (can be slow) python  ~rlambert/public/LBD/castorQuota.py
check30TB.py Disk usage of lhcbt3 castor element python  ~rlambert/public/LBD/check30TB.py
lhcbt3_rm.py Remove all files from a given user or directory python  ~rlambert/public/LBD/lhcbt3_rm.py <directory or username or file with list to remove>
 

Management Instructions:

Revision 12 (2010-12-10) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 80 to 80
 

-Access:

Changed:
<
<
The access list of the t3 castor is maintained the same as that of the lhcbt3 itself. If you find yourself without access, you should get in touch with RobLambert or ThomasRuf. To see the access list call stager_listprivileges -S lhcbt3.
>
>
The access list of the t3 castor is maintained by Joel Closier. If you find yourself without access, you should get in touch with RobLambert or ThomasRuf. To see the access list call stager_listprivileges -S lhcbt3. Any lbd members should be granted access.
 

-How to use:

Revision 11 (2010-12-01) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 25 to 25
 

Personal Resources:

  • Afs Home directories:
Changed:
<
<
    • All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space linked from there.
>
>
    • All CERN users can get a 1GB Afs home directory, sans approval here
    • All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space softlinked from their home directory.
 
    • Your Afs space is for code development, and storing documents, ntuples, grid jobs, that you want backed up, but are not massive files.
    • fs listquota to get the quota/usage, use separately in each scratch space
    • To obtain this increase contact RobLambert, who will forward your request

Revision 10 (2010-11-24) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 37 to 37
 

Grid Resources:

  • CERN-USER:
Changed:
<
<
    • LHCb has a large amoiunt of disk space shared between all collaborators at each site
>
>
    • LHCb has a large amount of disk space shared between all collaborators at each site
 
    • The CERN-USER disk, is the element located at CERN (T0)
    • your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota
    • It is permanately staged to disk.

Revision 9 (2010-11-22) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 58 to 58
 
    • to see the usage call fs listquota on each volume
    • The access list is maintained as a subgroup of z5, z5:cernlbd, to see if you are there call pts mem z5:cernlbd
    • To see the usage and check for problems call: python  ~rlambert/public/LBD/check1TB.py
Changed:
<
<
>
>
    • To be added to the acces list contact RobLambert
    • to change the access list of directories you have created, you need to be added to admin your own directories, contact RobLambert for that
 
  • t3 Castor space:
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.

Revision 8 (2010-11-19) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 68 to 68
 
    • See the total usage here: lhcbt3-castor-monitoring, or use stager_qry -S lhcbt3 -s
    • To see your usage proportion, run: python ~rlambert/public/LBD/castorQuota.py
    • To for each individual file you can tell if it is on lhcbt3 or another system by calling stager_qry -S lhcbt3 -M /castor/cern.ch/user/...
Added:
>
>
    • To see the access list call stager_listprivileges -S lhcbt3
 

More on the Castor disk pool:

Line: 78 to 79
 

-Access:

The access list of the t3 castor is maintained the same as that of the lhcbt3 itself. If you find yourself without access, you should get in touch with RobLambert or ThomasRuf.

Added:
>
>
To see the access list call stager_listprivileges -S lhcbt3.
 

-How to use:

Line: 87 to 90
 
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
stager_get Stage existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
Added:
>
>
stager_listprivileges To see the access list stager_listprivileges -S lhcbt3
 
STAGE_SVCCLASS Set lhcbt3 as your default staging pool export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of lhcbt3, or of a certain directory python  ~rlambert/public/LBD/castorQuota.py

Revision 7 (2010-11-19) - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 12 to 12
 
Where What? Size Backed-up Staging? path
Afs home Directory Ganga job repository, critical code you're working on, all other small files 2GB Yes   ~
Afs scratch directory Ganga job workspace, temporary ntuples, noncritical code 3GB No   ~/scratch*
Changed:
<
<
Group scratch Ganga job workspace, temporary ntuples, temporary DSTs 1TB No   /afs/cern.ch/lhcb/group/lbd
>
>
Group scratch Ganga job workspace, temporary ntuples, temporary DSTs 1TB No   /afs/cern.ch/project/lbcern
 
User castor backup tars, large ntuples to share, old ntuples ??TB Yes Tape /castor/cern.ch/user/<a>/<another>
Grid castor Ntuples from the grid, Selected DSTs 5TB Yes Disk /castor/cern.ch/user/grid/lhcb/user/<a>/<another>
Group castor Output from lhcbt3 jobs, large numbers of ntuples and DSTs 30TB Yes and No Disk /castor/cern.ch/<somepath>
Line: 48 to 48
 The group disk resources are to be used only if the above available resources are not appropriate. It is maintained as a shared use policy, if you are using too high a proportion you will be asked to cut down.

  • Afs group disk:
Changed:
<
<
    • /afs/cern.ch/lhcb/group/lbd
>
>
    • /afs/cern.ch/project/lbcern
 
    • We have a group disk on Afs.
    • It is not backed up, it is a volatile temporary storage disk.
Changed:
<
<
    • the current size is 1TB
>
>
    • the current size is 1TB, divided into 10 volumes of 100GB each
    • Keep your items inside a directory with your name, anything outside a named directory will be deleted.
    • mkdir /afs/cern.ch/project/lbcern/vol<X>/<username>
 
    • Use for collect large sets of outputs, that you can regenerate quickly, collections of small files, from many different grid jobs without the need to continuously clean your home directory.
Changed:
<
<
    • Keep your items inside a directory with your name, anything outside a named directory will be deleted.
>
>
    • To see the usage, call fs listquota on each volume (see the sketch after this list)
    • The access list is maintained as a subgroup of z5, z5:cernlbd; to see if you are there, call pts mem z5:cernlbd
    • To see the usage and check for problems call: python  ~rlambert/public/LBD/check1TB.py
    • To be added, contact RobLambert
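
A minimal sketch of the usual steps, where vol1 and someuser stand in for a real volume and username:

  # create your own named directory on one of the volumes
  mkdir /afs/cern.ch/project/lbcern/vol1/someuser
  # check the quota and usage of that volume
  fs listquota /afs/cern.ch/project/lbcern/vol1
  # confirm that you are in the access group
  pts mem z5:cernlbd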
 
  • t3 Castor space:
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape; however, it does not count towards your Grid quota.
Line: 86 to 91
 
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
castorQuota.py Disk usage from you of lhcbt3, or of a certain directory python  ~rlambert/public/LBD/castorQuota.py
Added:
>
>

Management Instructions:

See CernLbdDiskManagement.

 

-- RobLambert - 11-Nov-2010

Revision 6 2010-11-16 - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Revision 5 2010-11-15 - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 61 to 61
 
    • The current size is 30TB.
    • This is to be used to share files locally, if there is some reason the files cannot sit on the Grid, and they need to be permanently staged.
    • See the total usage here: lhcbt3-castor-monitoring, or use stager_qry -S lhcbt3 -s
Added:
>
>
    • To see your usage proportion, run: python ~rlambert/public/LBD/castorQuota.py
 
    • For each individual file, you can tell whether it is on lhcbt3 or another system by calling stager_qry -S lhcbt3 -M /castor/cern.ch/user/...

More on the Castor disk pool:

Line: 83 to 84
 
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
STAGE_SVCCLASS Set lhcbt3 as your default staging pool export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
Added:
>
>
castorQuota.py Disk usage from you of lhcbt3, or of a certain directory python  ~rlambert/public/LBD/castorQuota.py
 

Revision 4 2010-11-15 - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 19 to 19
 
  • The size is the user limit if it is your personal resource, or the full size if it is shared.
  • The shared resources maintain a fair use policy.
Changed:
<
<
  • The non-backed up spaces are volotile, do not use them for critical items.
>
>
  • The non-backed up spaces are volatile, do not use them for critical items.
 
  • The individual elements are described below.

Personal Resources:

Line: 28 to 28
 
    • All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space linked from there.
    • Your Afs space is for code development, and storing documents, ntuples, grid jobs, that you want backed up, but are not massive files.
    • fs listquota to get the quota/usage, use separately in each scratch space
Added:
>
>
    • To obtain this increase contact Rob Lambert, who will forward your request
 
  • Castor Space:
    • All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.
Line: 64 to 65
 

More on the Castor disk pool:

Added:
>
>

-Description:

 The 30TB t3 castor disk pool is special: it is a stager pool of disks for castor. Usually, when you upload files to your castor working area they are backed up on tape and available on a staged disk for a short time, after which the staged disk copy is deleted and they need to be staged again. The lhcbt3 is a castor staging pool: existing castor files can be copied there by staging them there, in which case they remain backed up on tape; files created once and only on that space are not backed up to tape.
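
As a rough sketch of the two cases (the paths are hypothetical examples):

  # an existing tape-backed castor file staged into lhcbt3: it stays backed up on tape
  stager_get -M /castor/cern.ch/user/s/someuser/existing.root -S lhcbt3
  # a new file written only into the lhcbt3 service class: disk-only, not backed up to tape
  (export STAGE_SVCCLASS="lhcbt3"; rfcp new.root /castor/cern.ch/user/s/someuser/new.root)
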
Added:
>
>

-Access:

The access list of the t3 castor is maintained in the same way as that of the lhcbt3 itself. If you find yourself without access, you should get in touch with Rob Lambert or Thomas Ruf.

-How to use:

 lhcbt3 is a Castor disk pool; to move things there you need a different service class (see the sketch after the table below).

 
Added:
>
>
Command Use Example
stager_qry Check if file is there stager_qry -S lhcbt3 -M <existing castor file>
stager_get Stage existing castor file into lhcbt3 stager_get -M <existing castor file> -S lhcbt3
stager_rm Remove file only from lhcbt3 stager_rm -M <existing castor file> -S lhcbt3
STAGE_SVCCLASS Set lhcbt3 as your default staging pool export STAGE_SVCCLASS="lhcbt3"
rfcp Copy new file to lhcbt3 rfcp <source> <destination> [after setting STAGE_SVCCLASS ]
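
A minimal sketch of the rfcp route (the destination path is a hypothetical example); unset the variable afterwards so later commands do not quietly stage everything into lhcbt3:

  export STAGE_SVCCLASS="lhcbt3"
  rfcp new.root /castor/cern.ch/user/s/someuser/new.root
  # clean up, otherwise every subsequent castor access uses the lhcbt3 pool
  unset STAGE_SVCCLASS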
 

Revision 3 2010-11-12 - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space

Line: 10 to 10
 

Where to store what?

Where What? Size Backed-up Staging? path
Changed:
<
<
Afs home Directory Ganga job repository, critical code you're working on, all other small files 2GB Yes Disk ~
Afs scratch directory Ganga job workspace, temporary ntuples, noncritical code 3GB No Disk ~/scratch*
Group scratch Ganga job workspace, temporary ntuples, temporary DSTs 1TB No Disk /afs/cern.ch/lhcb/group/lbd
>
>
Afs home Directory Ganga job repository, critical code you're working on, all other small files 2GB Yes   ~
Afs scratch directory Ganga job workspace, temporary ntuples, noncritical code 3GB No   ~/scratch*
Group scratch Ganga job workspace, temporary ntuples, temporary DSTs 1TB No   /afs/cern.ch/lhcb/group/lbd
 
User castor backup tars, large ntuples to share, old ntuples ??TB Yes Tape /castor/cern.ch/user/<a>/<another>
Grid castor Ntuples from the grid, Selected DSTs 5TB Yes Disk /castor/cern.ch/user/grid/lhcb/user/<a>/<another>
Changed:
<
<
Group castor Output from lhcbt3 jobs, large numbers of ntuples and DSTs 30TB yes Disk /castor/cern.ch/user/<a>/<another>
>
>
Group castor Output from lhcbt3 jobs, large numbers of ntuples and DSTs 30TB Yes and No Disk /castor/cern.ch/<somepath>
 
  • The size is the user limit if it is your personal resource, or the full size if it is shared.
  • The shared resources maintain a fair use policy.
Line: 62 to 62
 
    • See the total usage here: lhcbt3-castor-monitoring, or use stager_qry -S lhcbt3 -s
    • For each individual file, you can tell whether it is on lhcbt3 or another system by calling stager_qry -S lhcbt3 -M /castor/cern.ch/user/...
Added:
>
>

More on the Castor disk pool:

The 30TB t3 castor disk pool is special: it is a stager pool of disks for castor. Usually, when you upload files to your castor working area they are backed up on tape and available on a staged disk for a short time, after which the staged disk copy is deleted and they need to be staged again. The lhcbt3 is a castor staging pool: existing castor files can be copied there by staging them there, in which case they remain backed up on tape; files created once and only on that space are not backed up to tape.

 

-- RobLambert - 11-Nov-2010

Revision 2 2010-11-12 - RobLambert

Line: 1 to 1
 
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space


Changed:
<
<
Old tape drives ;)New tape
>
>
Old tape drives ;) --> New tape
 

Where to store what?

Changed:
<
<
  • Critical code you're working on, critical small ntuples, documents, scripts : Afs home Directory
  • Ganga job repository : Afs home directory
  • Ganga job workspace : Afs scratch directory (preferably your own, if not, the group space)
  • Old ntuples which don't need to be shared any more, old DSTs used for publication which need to be stored almost forever, backups of your documents : Your castor user area /castor/cern.ch/user/...
  • Ntuples and DSTs which come back from the GRID that you want to share with everyone : CERN-USER grid storage /castor/cern.ch/grid/lhcb/user/...
  • large sets of outputs, that you can regenerate quickly, collections of small files, from many different grid jobs : 1TB Afs group drive
  • large sets of output you do want to keep which are very large and needed by you and a couple of others : 30 TB Castor lhcbt3 disk
>
>
Where What? Size Backed-up Staging? path
Afs home Directory Ganga job repository, critical code you're working on, all other small files 2GB Yes Disk ~
Afs scratch directory Ganga job workspace, temporary ntuples, noncritical code 3GB No Disk ~/scratch*
Group scratch Ganga job workspace, temporary ntuples, temporary DSTs 1TB No Disk /afs/cern.ch/lhcb/group/lbd
User castor backup tars, large ntuples to share, old ntuples ??TB Yes Tape /castor/cern.ch/user/<a>/<another>
Grid castor Ntuples from the grid, Selected DSTs 5TB Yes Disk /castor/cern.ch/user/grid/lhcb/user/<a>/<another>
Group castor Output from lhcbt3 jobs, large numbers of ntuples and DSTs 30TB yes Disk /castor/cern.ch/user/<a>/<another>

  • The size is the user limit if it is your personal resource, or the full size if it is shared.
  • The shared resources maintain a fair use policy.
  • The non-backed up spaces are volotile, do not use them for critical items.
  • The individual elements are described below.
 

Personal Resources:

Changed:
<
<

Afs Home directories:

All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space linked from there.

Your Afs space is for code development, and storing documents, ntuples, grid jobs, that you want backed up, but are not massive files.

Castor Space:

All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.

After a while files will be stored on tape, and take some time to stage.

>
>
  • Afs Home directories:
    • All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space linked from there.
    • Your Afs space is for code development, and storing documents, ntuples, grid jobs, that you want backed up, but are not massive files.
    • fs listquota to get the quota/usage, use separately in each scratch space

  • Castor Space:
    • All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.
    • After a while files will be stored on tape, and take some time to stage.
 

Grid Resources:

Changed:
<
<
LHCb has a large CERN-USER disk, your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota

It is permanently staged to disk.

Use it to store large ntuples and DSTs that you got from the grid, which you want to share with others in the collaboration and which you don't want to disappear back to tape.

>
>
  • CERN-USER:
    • LHCb has a large amount of disk space shared between all collaborators at each site
    • The CERN-USER disk is the element located at CERN (T0)
    • Your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota
    • It is permanently staged to disk.
    • Use it to store large ntuples and DSTs that you got from the grid, which you want to share with others in the collaboration and which you don't want to disappear back to tape.
 

Group Resources:

Changed:
<
<
The group disk resources are to be used if the above available resources are not appropriate.

Afs group disk:

We have a group disk on Afs, which is currently 1TB in size. It is not backed up, it is a volatile temporary storage disk.

Use it to collect large sets of outputs that you can regenerate quickly, and collections of small files from many different grid jobs, without the need to continuously clean your home directory.

Castor space:

We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.

The current size is 30TB.

This is to be used to share files locally, if there is some reason the files cannot sit on the Grid, and they need to be permanently staged.

>
>
The group disk resources are to be used only if the above resources are not appropriate. They are maintained under a shared-use policy; if you are using too high a proportion you will be asked to cut down.
 
Added:
>
>
  • Afs group disk:
    • /afs/cern.ch/lhcb/group/lbd
    • We have a group disk on Afs.
    • It is not backed up, it is a volatile temporary storage disk.
    • the current size is 1TB
    • Use it to collect large sets of outputs that you can regenerate quickly, and collections of small files from many different grid jobs, without the need to continuously clean your home directory.
    • Keep your items inside a directory with your name, anything outside a named directory will be deleted.

  • t3 Castor space:
    • We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.
    • It is technically tied to the lhcbt3
    • The current size is 30TB.
    • This is to be used to share files locally, if there is some reason the files cannot sit on the Grid, and they need to be permanently staged.
    • See the total usage here: lhcbt3-castor-monitoring, or use stager_qry -S lhcbt3 -s
    • For each individual file, you can tell whether it is on lhcbt3 or another system by calling stager_qry -S lhcbt3 -M /castor/cern.ch/user/...
 

Revision 1 2010-11-11 - RobLambert

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="CernLbd"

CernLbdDiskSpace

This is the LHCb CERN LBD group page describing locally available disk space


Old tape drives ;)New tape

Where to store what?

  • Critical code you're working on, critical small ntuples, documents, scripts : Afs home Directory
  • Ganga job repository : Afs home directory
  • Ganga job workspace : Afs scratch directory (preferably your own, if not, the group space)
  • Old ntuples which don't need to be shared any more, old DSTs used for publication which need to be stored almost forever, backups of your documents : Your castor user area /castor/cern.ch/user/...
  • Ntuples and DSTs which come back from the GRID that you want to share with everyone : CERN-USER grid storage /castor/cern.ch/grid/lhcb/user/...
  • large sets of outputs, that you can regenerate quickly, collections of small files, from many different grid jobs : 1TB Afs group drive
  • large sets of output you do want to keep which are very large and needed by you and a couple of others : 30 TB Castor lhcbt3 disk

Personal Resources:

Afs Home directories:

All Lbd members can get an expansion of their Afs home directory to 2GB and an additional 3GB scratch space linked from there.

Your Afs space is for code development, and storing documents, ntuples, grid jobs, that you want backed up, but are not massive files.

Castor Space:

All CERN users have a castor home directory, where you can store almost anything you like, put things here if they cannot fit on your Afs dir, and need it to be backed up for a long time.

After a while files will be stored on tape, and take some time to stage.

Grid Resources:

LHCb has a large CERN-USER disk, your quota is around 5TB, and you should clean it regularly following the instructions here: GridStorageQuota

It is permanently staged to disk.

Use it to store large ntuples and DSTs that you got from the grid, which you want to share with others in the collaboration and which you don't want to disappear back to tape.

Group Resources:

The group disk resources are to be used if the above available resources are not appropriate.

Afs group disk:

We have a group disk on Afs, which is currently 1TB in size. It is not backed up, it is a volatile temporary storage disk.

Use it to collect large sets of outputs that you can regenerate quickly, and collections of small files from many different grid jobs, without the need to continuously clean your home directory.

Castor space:

We have a group disk pool on Castor. Like the grid resource it is always on disk, not tape, however it does not count towards your Grid quota.

The current size is 30TB.

This is to be used to share files locally, if there is some reason the files cannot sit on the Grid, and they need to be permanently staged.
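
A minimal sketch of checking the pool, assuming the castor stager tools are available and the file path is hypothetical:

  # overall usage of the lhcbt3 service class
  stager_qry -S lhcbt3 -s
  # whether a particular file is in the pool
  stager_qry -S lhcbt3 -M /castor/cern.ch/user/s/someuser/somefile.root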


-- RobLambert - 11-Nov-2010

META FILEATTACHMENT attachment="old_tape.jpg" attr="" comment="Tape!" date="1289495809" name="old_tape.jpg" path="E:\rob\Lbd\old_tape.jpg" size="53106" stream="E:\rob\Lbd\old_tape.jpg" tmpFilename="/usr/tmp/CGItemp19125" user="rlambert" version="1"
META FILEATTACHMENT attachment="storagetek.jpg" attr="" comment="New tape" date="1289495925" name="storagetek.jpg" path="E:\rob\Lbd\storagetek.jpg" size="10572" stream="E:\rob\Lbd\storagetek.jpg" tmpFilename="/usr/tmp/CGItemp19145" user="rlambert" version="1"
 