cvmfs 2.1

Hints on updating: please update only a small number of nodes at first and monitor them. Grid sites should watch for "hot nodes" - for example, in Panda, you can look at to see failures by node (substitute your site name). To revert, simply reinstall the old production version.

This is the client installation documentation for cvmfs in ATLAS; please also refer to the starting point ATLAS cvmfs TWiki and additional relevant site and usage information. Installation needs to be done on each machine that mounts the cvmfs volume (except as noted otherwise in the appropriate section, e.g. for nfs exports).

In addition to the installation of cvmfs described here, you will also need to install the software required for ATLAS; this can be found here:

or you can run this script after cvmfs installation /cvmfs/ to show what is missing (it can be run as a normal user).

Machine configuration

  • Create a partition /<path>/cache/cvmfs2 for the local cvmfs cache. TIP If you use /var/lib/cvmfs then you do not need to define CVMFS_CACHE_BASE in /etc/cvmfs/default.local.
  • It can be anywhere on a local disk (not nfs), but avoid the root (/) partition.
  • 25-50 GB of space is recommended for the local caching of cvmfs.
    • (Note the partition above; in the Configure cvmfs section below, in /etc/cvmfs/default.local, CVMFS_CACHE_BASE should point to it.)
    • (Note the size of the partition above in MB; in the Configure cvmfs section below, in /etc/cvmfs/default.local, CVMFS_QUOTA_LIMIT should be set to 90% of the partition size (in MB), due to internal db files.)
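The 90% sizing rule above can be sketched in shell. This is a hypothetical example: the 51200 MB (50 GB) partition size is an assumption; substitute the real size reported by df -m for your cache partition.

```shell
# Derive CVMFS_QUOTA_LIMIT (90% of the cache partition, in MB)
# from an assumed 50 GB (51200 MB) partition.
partition_mb=51200   # replace with the size from: df -m <cache partition>
quota_limit_mb=$(( partition_mb * 90 / 100 ))
echo "CVMFS_QUOTA_LIMIT=${quota_limit_mb}"   # -> CVMFS_QUOTA_LIMIT=46080
```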

Install cvmfs

Note US OSG sites, please use these instructions instead: Open Science Grid CVMFS install instructions

Fetch and install the repo information (only needed first time):

yum install

Install or update cvmfs:

(Note: If you are updating, do "yum update" instead of "yum install" below).

# for Tier1 and 2 only: 
yum install cvmfs cvmfs-config-default 

# for Tier3s and others:
yum install cvmfs cvmfs-config-default cvmfs-auto-setup

Or, if you prefer, install the versions that were tested on a Tier3 (as of 27 Mar 2019) instead:
# for SL6  (SL5 users should migrate to SL6)
yum install cvmfs-2.6.0-1.el6 cvmfs-config-default-1.7-1 cvmfs-auto-setup-1.5-2

# for SL7 
yum install cvmfs-2.6.0-1.el7 cvmfs-config-default-1.7-1 cvmfs-auto-setup-1.5-2

Warning, important You are advised not to let yum update cvmfs automatically. See this post in the atlas-adc-tier3-managers eGroup on how to disable automatic updates.

Warning, important For Tier1/2s, note that the cvmfs-auto-setup rpm was intended for Tier3s (please install it if you are a Tier3!) and will configure settings automatically. It does the following:

  • linking the cvmfs map in /etc/auto.master
  • adding user_allow_other to /etc/fuse.conf
  • starting cvmfs
You will have to manually configure these yourself if you do not install cvmfs-auto-setup.
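If you do not install cvmfs-auto-setup, the manual configuration corresponding to the three items above looks roughly like this. This is a sketch: the map path shown is the usual default, but verify it against your cvmfs version.

```
# /etc/auto.master -- link the cvmfs map:
/cvmfs /etc/auto.cvmfs

# /etc/fuse.conf -- allow non-root users to access the FUSE mount:
user_allow_other
```

After editing these files, restart autofs (service autofs restart) so the new map is picked up.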

Be informed !

  • Please subscribe to atlas-adc-tier3-managers if you are a Tier3 manager and we will advise you on when to update.
  • You may also want to subscribe to cvmfs-talk.
  • You should subscribe to atlas-adc-cvmfs which is focused on cvmfs in ATLAS.
  • If you are testing nightlies or release candidates, please subscribe to cvmfs-testing for announcements etc.

Configure cvmfs

Note that the default CVMFS_CACHE_BASE location has changed.

Edit (create if it does not exist) the file /etc/cvmfs/default.local; in particular, set

  • CVMFS_CACHE_BASE=/<path>/cache/cvmfs2 HELP Not needed if you use /var/lib/cvmfs
  • CVMFS_QUOTA_LIMIT=<90% of the partition size (in MB)>

HELP This setting is needed for SL6 if the kernel is older than 2.6.32-358.11.1.el6.x86_64, but it does not make cvmfs writeable! We recommend updating the kernel and skipping this setting.


If you have your own squid server (which you should), then include its address; i.e. set (don't forget the quotation marks, and the semicolons separating the fields if there is more than one squid):

  • CVMFS_HTTP_PROXY="< address of the squid server>"
Note the syntax:
  • CVMFS_HTTP_PROXY="localsquid1|localsquid2;remotesquid3" This "load balancing" of local squids allows removing them one at a time for maintenance tasks (recommended). Note that as of version 2.1.20, if you have a DNS round robin, you can define the DNS alias instead of defining each proxy; the alias will then be automatically resolved into a load-balancing group by cvmfs.
  • CVMFS_HTTP_PROXY="squid1;squid2" This will use squid2 if squid1 becomes unavailable.
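Putting the settings above together, a hedged example of a complete /etc/cvmfs/default.local. Every value is a placeholder to adapt to your site; the cache path and squid names are assumptions.

```
# Example /etc/cvmfs/default.local -- adapt every value to your site.
CVMFS_CACHE_BASE=/scratch/cache/cvmfs2      # not needed if you use /var/lib/cvmfs
CVMFS_QUOTA_LIMIT=46080                     # 90% of a 50 GB cache partition, in MB
CVMFS_HTTP_PROXY="localsquid1|localsquid2;remotesquid3"
```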

If you don't have a squid server:


At CERN, check configuration instructions here: ClientSetupCERN

A list of Stratum-1 servers is available, and from version 2.1.20 they are selected based on GeoIP.

Start services

Note that there are no system services for cvmfs in version 2.1. Automount will mount it when the dir is accessed.

  • Make sure autofs is started: service autofs start

  • Make sure cvmfs is a member of group fuse. (fuse should be installed as part of the cvmfs installation if it is not already there; if the cvmfs user is not part of group fuse, or group fuse does not exist in /etc/group, you will have to add or edit it.)

  • Check the setup scripts: cvmfs_config chksetup

  • Some quick tests:
    • cvmfs_config probe
    • cvmfs_config status
    • cvmfs_config stat -v

Note that init scripts have been removed; you will need to use cvmfs_config to reload etc.
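The group-membership requirement above can be checked with a short script. This is a sketch, purely informational, and can be run as a normal user; it only inspects the supplementary member list of the group.

```shell
# Report whether user "cvmfs" is listed as a member of group "fuse".
in_group() {
    getent group "$2" | awk -F: '{print $4}' | tr ',' '\n' | grep -qx "$1"
}

if in_group cvmfs fuse; then
    echo "cvmfs is in group fuse"
else
    echo "cvmfs is NOT in group fuse (add it, or create the group, as needed)"
fi
```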

Checking cvmfs

Tools for debugging:

  • cvmfs_config stat : (can be run as normal user) shows cvmfs usage information (very useful !)

  • cvmfs_config chksetup : check the configurations

  • cvmfs_config showconfig : show the configurations

  • cvmfs_config probe : check that the cvmfs mount point is working

  • cvmfs_talk : query the cvmfs process running on a given node. (Changed in 2.1: the command is cvmfs_talk, not cvmfs-talk.)
    • You will first need to do ls /cvmfs/ or cvmfs_config probe
    • type cvmfs_talk to see the help; for example, you can type
      • cvmfs_talk cache list to see what caches are on your local disk
      • cvmfs_talk cleanup 1000 will remove the oldest files until the used space is below 1GB.

Site/Local squid server

If you have many computers accessing cvmfs, it is beneficial to run your own squid server. The instructions are the same as those for the Frontier-squid server (you can use the same squid to serve both cvmfs and Frontier) and can be found here: Squid rpm installation instructions. When you have one set up, please let us know so that we can add it to the database.

Machines which have limited space for cache

Although 25-50 GB of space is highly recommended, you can set a limit on how much disk the cache uses if you have much less space. In /etc/cvmfs/default.local, set the variable CVMFS_QUOTA_LIMIT=<Soft Limit in MB>. Warning, important You must set this value no matter how much space you have. Remember to set it to only 90% of what you really want to allocate (the other 10% will be taken up by hidden SQLite files).

NFS export of cvmfs

This should be done only as a very last resort for a site. It introduces a single point of failure, and other benefits of cvmfs, such as local file access instead of network file access, are lost.

Install cvmfs on nfs server

For this situation, you will need to
  • have good networking
  • have ample local cache space for cvmfs
  • run a RHEL6-variant OS (i.e. SL6); note this will be nfs4.
  • It is also recommended to install a local squid on that machine and allow access to it.

Install and Configure

In /etc/cvmfs/default.local, also add


In /etc/fuse.conf, add

user_allow_other
Note: you may see this warning - it can be safely ignored for now. (The message will be fixed in a future version of cvmfs.)

cvmfs_config chksetup
Warning: CernVM-FS map is not referenced from autofs master map

Hard mount cvmfs and nfs export

On the nfs server where cvmfs was installed:
# make mount points :
mkdir -p /cvmfs/
mkdir -p /cvmfs/
mkdir -p /cvmfs/
mkdir -p /cvmfs/

# in /etc/fstab, have these entries:
atlas         /cvmfs/   cvmfs   defaults   0 0
atlas-nightlies      /cvmfs/   cvmfs   defaults   0 0
atlas-condb      /cvmfs/   cvmfs   defaults   0 0
sft         /cvmfs/   cvmfs   defaults   0 0

# mount 
mount --all
# df -h should show you the mounted cvmfs repositories, and you can then redo the quick tests described in the previous section.

# in /etc/exports  (replace <netmask> with your own)
/cvmfs/ <netmask>(ro,sync,no_root_squash,no_subtree_check,fsid=101)
/cvmfs/ <netmask>(ro,sync,no_root_squash,no_subtree_check,fsid=102)
/cvmfs/ <netmask>(ro,sync,no_root_squash,no_subtree_check,fsid=103)
/cvmfs/ <netmask>(ro,sync,no_root_squash,no_subtree_check,fsid=104)

# modify the file /etc/sysconfig/nfs:
 diff nfs nfs.old

# start nfs
service nfs start
chkconfig nfs on

# fix iptables - change the <netmask> below to your netmask !
iptables -I INPUT -m state --state NEW -p tcp     -m multiport --dport 111,892,2049,32803 -s <netmask> -j ACCEPT
iptables -I INPUT -m state --state NEW -p udp     -m multiport --dport 111,892,2049,32769 -s <netmask> -j ACCEPT
service iptables save
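For illustration, here is what filled-in fstab and exports entries might look like. The full repository names and the netmask are hypothetical placeholders; use the repositories your site actually needs.

```
# /etc/fstab (cvmfs hard mounts on the nfs server) -- names are examples:
atlas.cern.ch         /cvmfs/atlas.cern.ch         cvmfs  defaults  0 0
atlas-condb.cern.ch   /cvmfs/atlas-condb.cern.ch   cvmfs  defaults  0 0

# /etc/exports (read-only export to the worker-node network) -- example netmask:
/cvmfs/atlas.cern.ch        192.168.0.0/24(ro,sync,no_root_squash,no_subtree_check,fsid=101)
/cvmfs/atlas-condb.cern.ch  192.168.0.0/24(ro,sync,no_root_squash,no_subtree_check,fsid=102)
```

Each export needs its own distinct fsid, as in the original listing above.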

On Worker nodes

Do not install cvmfs.

You should be able to see the nfs exports from your server: showmount -e <your nfs server hostname>

# make mount points:
mkdir -p /cvmfs/
mkdir -p /cvmfs/
mkdir -p /cvmfs/
mkdir -p /cvmfs/

# add to /etc/fstab; replace <nfs server> with your nfs server hostname:
<nfs server>:/cvmfs/ /cvmfs/ nfs noatime,ac,actimeo=60 0 0
<nfs server>:/cvmfs/ /cvmfs/ nfs noatime,ac,actimeo=60 0 0
<nfs server>:/cvmfs/ /cvmfs/ nfs noatime,ac,actimeo=60 0 0
<nfs server>:/cvmfs/ /cvmfs/ nfs noatime,ac,actimeo=60 0 0

mount --all
# ls of the mount points should show identical listings to that when done on the nfs server.
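The final check can be scripted; a sketch follows. The repository names are assumptions, so substitute the ones your site exports; the script only reports and changes nothing.

```shell
# Report, for each expected repository, whether its mount point is an
# active mount and non-empty. Informational only; safe to run anywhere.
for repo in atlas.cern.ch atlas-condb.cern.ch sft.cern.ch; do
    dir="/cvmfs/${repo}"
    if mountpoint -q "$dir" 2>/dev/null && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
        echo "OK   $dir"
    else
        echo "FAIL $dir"
    fi
done
```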


Please see the cvmfs debugging page if you encounter problems.

Mac OS X client installation and configuration

Tested on Mac OS X 10.9

Install software

First you need "FUSE for OS X" - get the latest release from and install it.

Then download the latest CVMFS client image from and install it.

Configure cvmfs

Open an interactive session and edit a few files as su (sudo) as described here:
  • Configure Cvmfs
  • The main changes are in /etc/cvmfs/default.local; here is an example of the file contents:

Mount cvmfs

(do as su - sudo)

To mount, we recommend copying the script (and making sure it is executable) into your /Users/<USERNAME>/private directory.

You can now execute the script and it will mount the needed cvmfs repositories.

If you want to have /cvmfs mounted after each reboot of your machine, copy cvmfs.filesystems.plist to /Library/LaunchDaemons, change <USERNAME> inside it to your <USERNAME>, and make sure the script is present in /Users/<USERNAME>/private/ . ALERT! Caution: this may cause login problems on Mac OS with FileVault enabled. If you are unable to log in, boot into recovery mode and remove cvmfs.filesystems.plist .

CERN VM / cvmfs official website

Major updates:
-- AsokaDeSilva - 12-Mar-2013

Responsible: AsokaDeSilva
Last reviewed by: Never reviewed

Topic attachments
  • cvmfs.filesystems.plist (plist, 0.5 K, 2014-01-29, EmilObreshkov)
  • shell script (0.5 K, 2014-05-09, EmilObreshkov)
Topic revision: r56 - 2019-03-27 - AsokaDeSilva