Steps for Setting Up A New Repository

Steps needed in advance.

  • Ask the repository requester for:
    • the initial size and number of files, as well as the expected growth rate of the repository.

Create an EGroup

Create an e-group called LxCvmfs-NewRepoName based on one of the other similarly named ones. Some fields are located in other tabs.
  • Name: LxCvmfs-NewRepoName
  • Topic: Service Management
  • Usage: Security/Mailing
  • Description: Writers of /cvmfs/
  • Self-Subscription Policy: CERN Users, With owner/admin approval
  • Privacy Policy: e-group Members

Set the Administrator e-group as LxCvmfs-NewRepoName

Add the people who request a new repository to the e-group you just created.

Send them an e-mail to notify the users about the e-group creation:



is approved. I have added you to an e-group I have just created called


The e-group serves two purposes:

1. Everyone in the e-group will have write access to

2. I will use this e-group to contact you in case of an
   intervention or problem.

Please ask your colleagues to join the e-group. I have set it up
such that everyone in the e-group is able to approve a new member.

Currently I am in the e-group but I will remove myself once things
are set up.

I'll get back to you when you can log in and write something. I would
hope sometime next week.

Install a node if need be.

If a new node is going to be used, the it-cvmfs git repository contains a script in cephcvmfs/ . Read it and use it to create a machine. The new machine will have no volumes to mount, so Puppet will stop and get no further; that's fine, we are going to fix that now. By default a tiny machine is created. Make it bigger for bigger users.

NOTE: if you want to use the local fast SSD to accelerate ZFS, then be sure to create the VM as m2.2xlarge and with the --nogrow option.
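The VM creation itself should be done with the cephcvmfs helper script mentioned above. Purely as a hedged illustration of the SSD case, the sketch below echoes (dry-run) what an equivalent OpenStack creation would look like; only the m2.2xlarge flavor comes from the note above, while the image and key names are placeholders, not the script's real invocation:

```shell
# Dry-run sketch: echo each command instead of executing it.
run() { echo "+ $*"; }

# m2.2xlarge is required when the local SSD will accelerate ZFS (see note).
flavor="m2.2xlarge"

# Image and key names below are placeholders (assumptions); the cephcvmfs
# script is the supported tool and also handles the --nogrow behaviour.
run openstack server create --flavor "$flavor" \
    --image "PLACEHOLDER-IMAGE" --key-name PLACEHOLDER-KEY lxcvmfsXX
```

The dry-run wrapper is only there so the sketch can be read safely; drop it for nothing, use the script.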

Create a Volume

Create a volume if necessary. This requires commands run on aiadm hosts. You will need to switch to the "IT CVMFS" OpenStack environment so the script acts on the right project.

eval $(ai-rc "IT CVMFS")

git clone
cd it-cvmfs/cephcvmfs

It contains a short helper script to create a volume for a repository. Read the script and THEN create a volume. There may be an error regarding the type of volume created.

There is also a script to create the backup volume. Ask the users whether they want a backup volume.
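The helper script wraps the OpenStack volume commands. As a hedged sketch only (the size and volume name are made up, and the script may request a specific volume type), creating and attaching a volume by hand would look like this dry-run:

```shell
# Dry-run sketch: echo each command instead of executing it.
# Size and names are placeholders; use the cephcvmfs helper script instead.
run() { echo "+ $*"; }

size_gb=500     # placeholder: use the initial size given by the requester

run openstack volume create --size "$size_gb" cvmfs-newreponame
run openstack server add volume lxcvmfsXX cvmfs-newreponame
```
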

Add YAML Configuration to a Machine.

The configuration for the cvmfs-stratum-0 machines is stored in the it-puppet-hostgroup-cvmfs/data/fqdns/ directory on a per-node basis. A typical addition to a node looks like the following, but in practice: copy and paste another one. Create a new YAML file for a new host, or add additional entries to an existing host's file.

All the "NewRepoName" fields should be entered in lowercase letters. The "XXX" uid number must be unique and greater than the others: search the project and use the greatest existing number + 1.

  - lxcvmfs-<repo>:
      user: cvNewRepoName
      uid: XXX
      repo_store: /var/spool/cvmfs/
      account: cvNewRepoName
      egroup: lxcvmfs-NewRepoName
      blockdevice: ToBeFilledLater
      fstype: zfs
      fsoptions: 'rw,noatime,_netdev'

Don't push the merge request yet; you will need to attach the volumes to the target machines lxcvmfsXX and, optionally, backup-cvmfsXX.

The "ToBeFilledLater" field is filled after you attach the volume to the host.

[root@lxcvmfsXX ~]# ls -l /dev/disk/by-id
Find the entry without the two partition suffixes (part1/part9) and with the latest date. Copy the part after the "virtio-" prefix.
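The extraction of the "blockdevice" value can be sketched like this; the device names below are made-up samples, on the real host you would feed in the listing of /dev/disk/by-id:

```shell
# Sample by-id names; on the host these come from ls /dev/disk/by-id.
names="virtio-0a1b2c3d4e5f
virtio-0a1b2c3d4e5f-part1
virtio-0a1b2c3d4e5f-part9"

# Drop the -part1/-part9 entries, keep the whole-disk name.
disk=$(echo "$names" | grep -v 'part' | head -1)

# The YAML "blockdevice" field is the part after the "virtio-" prefix.
blockdevice="${disk#virtio-}"
echo "$blockdevice"    # 0a1b2c3d4e5f
```
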

For the UID and account verify that the UID is not used on any other CVMFS repository and is not part of CERN LDAP.

[user@aiadmXX ~]$ phonebook -uid 303
[user@aiadmXX ~]$ phonebook -login cv

Both should return nothing. Also grep for the other UIDs in the fqdns directory to confirm uniqueness.
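Picking the next free uid can be sketched as follows; the input is simulated here, and on aiadm you would instead grep the real data files (e.g. grep -rh 'uid:' it-puppet-hostgroup-cvmfs/data/fqdns/):

```shell
# Simulated grep output; real input comes from the fqdns data files.
found="uid: 301
uid: 303
uid: 302"

# Take the maximum existing uid and add 1 to get a unique new one.
max=$(echo "$found" | awk '{print $2}' | sort -n | tail -1)
next=$((max + 1))
echo "$next"    # 304
```

Remember to also check the result against CERN LDAP with phonebook, as above.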

It's possible to add multiple repositories to the same host. Just have an array of entries in the YAML.
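As a sketch of the multi-repository case (repository names, uids, and blockdevice values here are made up; copy the real layout from an existing fqdns file), a host with two repositories simply carries two entries in the array:

```yaml
  - lxcvmfs-repoone:
      user: cvrepoone
      uid: 304
      repo_store: /var/spool/cvmfs/
      account: cvrepoone
      egroup: lxcvmfs-RepoOne
      blockdevice: 0123456789ab
      fstype: zfs
      fsoptions: 'rw,noatime,_netdev'
  - lxcvmfs-repotwo:
      user: cvrepotwo
      uid: 305
      repo_store: /var/spool/cvmfs/
      account: cvrepotwo
      egroup: lxcvmfs-RepoTwo
      blockdevice: ba9876543210
      fstype: zfs
      fsoptions: 'rw,noatime,_netdev'
```

Each entry needs its own unique uid and its own volume attached to the host.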

Add a DNS entry

cvmfs-NewRepoName should point to the host. Get the landb alias of the machine and append the new one to any existing aliases. Don't copy the apostrophes! The set command will replace the existing aliases, so if you omit a previous alias, problems will arise!
openstack server show Hostname | grep landb
openstack server set --property landb-alias=PreviousAliases,cvmfs-NewRepoName Hostname

Kernel Module Mayhem

Puppet and distrosync will pull in an aufs kernel. This kernel needs to be running, so reboot once it is installed. Until this is done, zfs cannot be installed.

Format Volume

The script


does everything, but read it first to see what is going on. In particular, it will exit early if you requested a backup and the backup destination is not yet in place.

The ZFS filesystem needs to be created on the backup machine first; run the commands written in the above script.

Add ZFS log and cache devices (Optional)

If this machine has a local fast SSD then we can use it to accelerate ZFS. Here is the rough procedure, still to be cleaned up:

1. Create m2.2xlarge with --nogrow
2. Partition vda for ZIL and L2ARC:

parted /dev/vda mkpart primary 20GB 25GB
parted /dev/vda mkpart primary 30GB 70GB
growpart /dev/vda 2

Now reboot.

pvresize /dev/vda2
lvextend -l +100%FREE /dev/mapper/VolGroup00-LogVol00
resize2fs /dev/mapper/VolGroup00-LogVol00
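The section never spells out the final zpool step, so here is a hedged dry-run sketch of attaching the new partitions as ZIL (log) and L2ARC (cache) devices. The pool name "tank" and the partition numbers are assumptions inferred from the parted commands above; check zpool status and lsblk on the real host first:

```shell
# Dry-run sketch: echo the zpool commands rather than executing them.
# Pool name and partition numbers are assumptions, not verified values.
run() { echo "+ $*"; }

run zpool add tank log /dev/vda3      # the 20-25GB partition as the ZIL
run zpool add tank cache /dev/vda4    # the 30-70GB partition as the L2ARC
run zpool status tank                 # verify the log and cache vdevs appear
```
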

Local home directory

The local cvNewRepoName user will have a home dir in /home/cvNewRepoName. The dir should be a softlink into zfs, but puppet will probably fail to create it, like this:

Notice: /Stage[main]/Hg_cvmfs::Lx/Hg_cvmfs::Private::Localzero[]/File[/home/cvintelsw]: Not removing directory; use 'force' to override
Notice: /Stage[main]/Hg_cvmfs::Lx/Hg_cvmfs::Private::Localzero[]/File[/home/cvintelsw]: Not removing directory; use 'force' to override
Error: Could not remove existing file
Error: /Stage[main]/Hg_cvmfs::Lx/Hg_cvmfs::Private::Localzero[]/File[/home/cvintelsw]/ensure: change from directory to link failed: Could not remove existing file

To fix this, just rm -rf /home/cvNewRepoName and then run puppet again.


Store the Repository Keys in Teigi

This is only for a new repository.

Go to aiadm to get the keys generated by puppet and store them with the secret sharing system "teigi":

export REPO=(NewRepoName1 NewRepoName2)
for I in ${REPO[*]}; do
    scp root@$LX:/var/lib/puppet/ssl/private_keys/$LX.pem $
    scp root@$LX:/var/lib/puppet/ssl/certs/$LX.pem $
    tbag set --hg cvmfs/lx --file $ $
    tbag set --hg cvmfs/lx --file $ $
done

If migrating from an old repository upload the existing keypair from the old node /etc/cvmfs/keys/NewRepoName.key and .crt.

Running puppet again will copy the keys on the node into the correct location for CvmFS.

It is now necessary to sign the cvmfswhitelist file. Get the fingerprint of the certificate and add it to the it-cvmfs git repository.

openssl x509 -in /etc/cvmfs/keys/<repo> -fingerprint -noout
SHA1 Fingerprint=2D:D7:B7:80:9E:E2:F7:06:1B:C7:2C:54:BB:91:82:74:0A:A8:52:F4

And in the git repository

cd signing
mkdir <repo>
cat <<EOF > <repo>/<repo>
2D:D7:B7:80:9E:E2:F7:06:1B:C7:2C:54:BB:91:82:74:0A:A8:52:F4
EOF
git add <repo>
git commit -m 'Add <repo>'
git push

Finally do a normal cvmfswhitelist publication. This requires the smart card or yubikey signing procedure. Do not skip this step.
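For reference only: outside of CERN's smart card / yubikey setup, the generic CVMFS way to refresh and re-sign a whitelist is cvmfs_server resign. This is a hedged dry-run sketch, not the CERN procedure, which must be followed as documented:

```shell
# Dry-run sketch; at CERN the smart card / yubikey signing procedure
# replaces this generic command.
run() { echo "+ $*"; }

run cvmfs_server resign "<repo>"
```
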

Bootstrap a new Repository if Necessary.

There are 3 options here:

A new repository
A script will have been created as


Using an existing repository
If the repository already resides on an existing Ceph volume, there is nothing to do.

Migrating a repository
There are some scripts in /etc/puppet-cvmfs-scripts.

Add the repository to the Stratum 0.5 and Stratum 1.

Three files in the cvmfs hostgroup:

  • zero.pp is the stratum 0.5, which is an apache proxypass to the real stratum 0's.
  • one/backend.pp is the stratum 1 backend, a big physical zfs server hosting the stratum 1 data.
  • one/frontend.pp is the stratum 1 squids.

Look at the existing entries in each to see what to add.

Quick Test

Log in to the node as root:

su - cv<repo>
cvmfs_server transaction <repo>
echo "TestFile, please delete me" > /cvmfs/<repo>/TestFile
cvmfs_server publish <repo>

Wait at most 1 hour; the file should then be visible on lxplus.

Everything Done, Inform the User

The repo /cvmfs/<repo> is now ready. Members of the e-group
should be able to log in with ssh to cvmfs-<repo>.

You are currently sharing this node with /cvmfs/ and /cvmfs/ . If
required we can split this resource.

To publish files, once logged in you must sudo to the shared account cv<repo>, from where you
can start a transaction and then publish it. You will find brief instructions in /etc/motd on the
box as well as a link to more detailed documentation.

Any immediate problems, then of course reply to me. After a time it's best to submit
support requests via SNOW.

Your file system is visible on lxplus and batch already

$ ls /cvmfs/

All being well, we will ask some other stratum 1s to replicate your file system shortly.

-- SteveTraylen - 18-Oct-2011

Topic revision: r33 - 2018-07-23 - EnricoBocchi