Gridmap dir mechanism

This page gives a quick insight into the gridmap dir mechanism currently used in the gLite middleware by all sites adhering to EGEE.

The gridmap dir mechanism allows the various LCMAPS plugins (VOMS-aware and not) used by the Computing Element service to map any given DN/FQAN to a local (pool) account.

This mechanism is based on a special directory (the gridmap dir) that contains hard links to special files (editable only by root) corresponding one-to-one to the local pool accounts defined on the CE/WNs. Below is a folded snapshot of the content of the gridmap dir on one of the CERN CEs (in this case actually exposed to all CEs via NFS).

[...]
2467411 -rw-r--r--  2 root root 0 Sep  5 11:33
2467411 -rw-r--r--  2 root root 0 Sep  5 11:33 lhcb061
2467886 -rw-r--r--  2 root root 0 Sep  5 11:18 %2fdc%3dch%2fdc%3dcern%2fou%3dorganic%20units%2fou%3dusers%2fcn%3dkumarv%2fcn%3d678168%2fcn%3dvineet%20kumar:zh
2467886 -rw-r--r--  2 root root 0 Sep  5 11:18 cms128
2467675 -rw-r--r--  2 root root 0 Sep  5 10:54
2467675 -rw-r--r--  2 root root 0 Sep  5 10:54 atlas038
2467029 -rw-r--r--  2 root root 0 Sep  5 10:22 %2fc%3dit%2fo%3dinfn%2fou%3dpersonal%20certificate%2fl%3dmilano%2fcn%3ddavide%20rebatto:zp:zp
2467029 -rw-r--r--  2 root root 0 Sep  5 10:22 atlprd09
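The encoded file names in the listing above follow a simple scheme: the DN is lowercased and every character other than an ASCII letter or digit is percent-encoded. A minimal sketch of such an encoder (the function name `encode_dn` is ours for illustration, not part of LCMAPS):

```python
def encode_dn(dn):
    """Encode a certificate DN the way it appears in the gridmap dir:
    lowercased, with every non-alphanumeric character percent-encoded."""
    out = []
    for ch in dn.lower():
        if ch.isascii() and ch.isalnum():
            out.append(ch)
        else:
            out.append('%%%02x' % ord(ch))
    return ''.join(out)
```

With this scheme, `/DC=ch/DC=cern/OU=Organic Units/...` becomes `%2fdc%3dch%2fdc%3dcern%2fou%3dorganic%20units%2f...`, matching the entries shown above.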

As soon as a new DN arrives at the site, the DN is encoded and checked against all entries in the gridmap dir. If no match is found, a new hard link is created to the first available pool account file. In the example above, the inode number shared by an encoded DN entry and a pool account file maps the DN unambiguously to that local pool account. If a match is found, on the other hand, the pool account already assigned in the past is re-used. The same mechanism has been generalized to the case where several families (groups) of pool accounts are defined for a given VO: the group of pool accounts used is determined by the VOMS role the user presents.
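The lookup-or-assign step described above can be sketched with plain hard-link operations. This is an illustrative reimplementation, not the actual LCMAPS code; `map_dn` and its arguments are assumed names:

```python
import os

def map_dn(gridmapdir, encoded_dn, prefix):
    """Return the pool account (a file named <prefix>NNN) mapped to encoded_dn,
    creating the hard link on first use -- a sketch of the gridmap dir logic."""
    dn_path = os.path.join(gridmapdir, encoded_dn)
    pool = sorted(n for n in os.listdir(gridmapdir) if n.startswith(prefix))
    if os.path.exists(dn_path):
        # Already mapped: the pool account file sharing the inode identifies it.
        ino = os.stat(dn_path).st_ino
        for name in pool:
            if os.stat(os.path.join(gridmapdir, name)).st_ino == ino:
                return name
        raise RuntimeError("dangling DN entry: %s" % encoded_dn)
    # New DN: grab the first free pool account (link count 1 means unassigned).
    for name in pool:
        path = os.path.join(gridmapdir, name)
        if os.stat(path).st_nlink == 1:
            os.link(path, dn_path)
            return name
    raise RuntimeError("no free pool accounts for prefix %r" % prefix)
```

Hard-link creation is atomic even over NFS, which is what makes this scheme usable when, as at CERN, several CEs share the same gridmap dir.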

The reader might ask: what if all pool accounts are taken? A cron job on the CE systematically goes through this directory for every group of pool accounts defined and checks whether at least 20% of them are free. If a group falls below this threshold (which happens quite often for default pool accounts), the cron job selects the pool accounts to release based on the date of their last activity/assignment.

In any case it does not release mappings newer than 10 days. Even with 200 pool accounts available, a VO (most likely ATLAS or CMS) could run out of available mappings, but this is far from probable for LHCb, which does not even have 200 distinct users; the defaults YAIM ships at the moment are very comfortable in terms of main pool accounts. For LHCb this is effectively impossible for the sgm pool accounts: in VOMS we have at most 10 users eligible to be mapped to local software managers. If no pool account can be released, the job submission fails with a Globus error (such as Globus error 3).
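A sketch of what such a cleanup pass could look like, combining the 20% free threshold and the 10-day minimum age described above (the function and argument names are assumed; the real cron job's selection logic may differ):

```python
import os, time

def release_stale_mappings(gridmapdir, prefix, min_free=0.20, min_age_days=10):
    """Unlink the oldest DN entries of one pool-account group until at least
    min_free of the group is free, never touching entries newer than min_age_days."""
    pool = {}  # inode -> pool account file name, for this group
    for name in os.listdir(gridmapdir):
        if name.startswith(prefix):
            pool[os.stat(os.path.join(gridmapdir, name)).st_ino] = name
    # DN entries of this group share an inode with one of the pool account files.
    dns = []
    for name in os.listdir(gridmapdir):
        st = os.stat(os.path.join(gridmapdir, name))
        if name.startswith('%') and st.st_ino in pool:
            dns.append((st.st_mtime, name))
    dns.sort()  # oldest assignment first
    free = sum(1 for name in pool.values()
               if os.stat(os.path.join(gridmapdir, name)).st_nlink == 1)
    cutoff = time.time() - min_age_days * 86400
    released = []
    for mtime, name in dns:
        if free >= min_free * len(pool):
            break
        if mtime > cutoff:
            break  # remaining mappings are too recent to release
        os.unlink(os.path.join(gridmapdir, name))
        free += 1
        released.append(name)
    return released
```

Removing the DN entry drops the pool account file's link count back to 1, which is exactly the "free" condition the assignment step looks for.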

Further reading about YAIM and SGM/PRD pool accounts can be found here.

-- RobertoSantinel - 05 Sep 2008

Topic revision: r3 - 2011-06-22 - AndresAeschlimann