This twiki page contains obsolete historical information, derived from the two pages prepared in 2011-2012 to describe the Coverity setup used at that time in LHCb.

A description of the more recent setup used in LHCb in 2016 can be found in the RunningCoverity twiki page.

-- AndreaValassi - 2016-02-18

1. Install Coverity Integrity Manager

This section of this TWiki page covers in detail the steps needed to install and set up an instance of the Coverity Integrity Manager (CIM).

CIM is the database and web interface to the defects that Coverity finds in the code.


The installation and upgrade steps are well described in the "install guide" document available from the Coverity download site, so this section covers only what is LHCb-specific, plus a few configuration corners that may not be obvious. For an upgrade, it is extremely important to follow the guide's advice: test the upgrade procedure first.

All the steps should be done as lhcbsoft on the machine (an alias pointing to lxbuild161 at the time of writing). It is also recommended to run the script command first: it starts a shell in which everything shown on the screen is logged to a file.

Following the instructions in the manual, start the install shell script.

After accepting the license, it asks where to install CIM: choose /build/coverity/coverity-integrity-manager for the final installation and a different directory for the upgrade test (e.g. /build/coverity/cim-test).

Do as suggested: use the bundled PostgreSQL database and the default location for the data. The suggested database port (5432) is fine, unless you are testing the upgrade on the same machine, in which case use (temporarily) port+1 (5433). For simplicity, set the password of the admin user to the same as the lhcbsoft one.

The proposed ports for the web services are fine, unless you are testing the upgrade on the same host, in which case apply the same port+1 rule used for the PostgreSQL port.

At this point it is better to set a shell variable cov_inst to the installation directory (Coverity does not need it, but it makes it easier to copy and paste the following commands without risking the use of the wrong installation).
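A minimal example in bash, using the final installation path chosen above:

```shell
# bash syntax; the path is the final installation directory chosen above
export cov_inst=/build/coverity/coverity-integrity-manager
```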

Adapt the syntax to your shell and the path to your installation.

Configuration from the shell


By default, CIM is started at this point. Since we need to configure a few things in the server and a restart will be needed anyway, it is better to stop it now:
$cov_inst/bin/cov-im-ctl stop


Go to the directory $cov_inst/server/coverity-tomcat:
cd $cov_inst/server/coverity-tomcat
and open the file conf/server.xml for editing
vim conf/server.xml
Look for the Connector tag with SSLEnabled set to true. It should be commented out, so remove the comment tags (<!-- and -->) around it. For an upgrade test, change the port number following the +1 rule.
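After uncommenting, the entry looks roughly like the standard Tomcat SSL connector shown below (the port and attribute values are illustrative; keep whatever values ship with your CIM version):

```xml
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />
```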

Save the file and open webapps/ROOT/WEB-INF/web.xml:

vim webapps/ROOT/WEB-INF/web.xml
Towards the end of the file, just before the closing </web-app> tag, add the following lines:


JRE CA Certs

Note: with the latest configuration of LDAP, this step might not be needed.

This may sound strange, but we need to update the cacerts file in the JRE (Java Runtime Environment) installed by CIM to make it able to connect to the LDAP server at CERN.

Go to $cov_inst/jre/lib/security

cd $cov_inst/jre/lib/security

We need to get the certificate for LDAP with the openssl command and extract the section of the output between the BEGIN and END CERTIFICATE markers.

echo | openssl s_client -connect | awk '/BEGIN CERT/{c=1} (c){print} /END CERT/{c=0}' > cerndc13.pem
(for the curious: the leading echo is needed because "openssl s_client" expects some input; the awk script sets the flag c to true when it sees BEGIN CERT, prints the line while c is true, and resets it to false on the END CERT line).
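The awk filter can be checked on some sample s_client-style output (the input below is made up; the real command pipes openssl s_client into the same awk script):

```shell
# Demonstration of the awk certificate filter on fake s_client output:
# it keeps only the lines between (and including) the BEGIN/END markers.
printf 'depth=0 CN=example\n-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\nDONE\n' \
  | awk '/BEGIN CERT/{c=1} (c){print} /END CERT/{c=0}'
```

Only the three certificate lines survive the filter.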

Add the downloaded certificate to cacerts (note that by default cacerts is read-only):

chmod u+w cacerts
../../bin/keytool -import -keystore cacerts -storepass changeit -noprompt -file cerndc13.pem -alias
chmod a-w cacerts
The keytool command line above will add the retrieved certificate as a trusted one in cacerts.


Open the configuration file of PostgreSQL
vim $cov_inst/database/postgresql.conf

According to the install manual, set

shared_buffers = 256MB
work_mem = 48MB
maintenance_work_mem = 512MB
wal_buffers = 8MB
checkpoint_segments = 32
effective_cache_size = 1GB (suggested MB)


Now you can re-start the server:
$cov_inst/bin/cov-im-ctl start

Configuration from the web

Connect to the web interface (replace the host and port with the correct ones if you are testing the upgrade). Log in as admin with the password you specified during the installation phase (the lhcbsoft one).

Click on the link Administration at the top right of the page.

Sign In Settings


Check the items:

  • Enable LDAP authentication
  • Disable local password authentication
  • Create LDAP users automatically on sign in
un-check the others and apply.

LDAP Configuration


Examples and help on how to configure LDAP can be found on UserGroups.

Connection Settings

  • Host Name:
  • Port: 389
  • Base DN: DC=cern,DC=ch
  • Use secure connection (SSL): un-checked
  • Use anonymous bind: un-checked
  • Bind DN: CERN\lhcbsoft
  • Bind Password: the lhcbsoft password

User Search Settings

Click on "Pre-Fill Microsoft Active Directory Settings", then set
  • User Search Base DN: OU=Users,OU=Organic Units

Group Search Settings

  • Group Search Base DN: OU=e-groups,OU=Workgroups
  • Group ObjectClass: group
  • Group Name Attribute: name
  • Member Attribute: member
  • Additional group filter: (&(objectClass=group)(cn=lhcb-svn-writers))

Email Configuration


Check the box "Allow Integrity Manager to send email".

  • Host Name: localhost
  • Port: check "Use default port"
  • From Address:


Users cannot connect (2011-10-12)

There has been a problem recently with LDAP users not able to connect.

The solution has been to change the server host in the LDAP configuration, including the update of the JRE certificates and a restart of the server with cov-im-ctl.

Users cannot connect (2012-03-21)

Another problem with the LDAP settings.

The new configuration does not use SSL and does not use a fixed machine (it uses the alias), so the update of the JRE certificates is no longer needed.

-- MarcoClemencic - 13-Oct-2011

2. Running Coverity

Coverity is a code sanity checker. The installation and basic configuration is handled in the InstallCoverityIntegrityManager section above.

Running Coverity

Since Coverity is a commercial tool, it can only be run on specific (licensed) machines. It is run every odd day.

  • to generate its static code analysis information, Coverity runs in several steps. First, wrap the build with cov-build:
 MyCoverityPath/cov-build --dir MyCoverityDir/INT make -j 20 -l 16
to record the code's internal relationships etc.
  • to avoid analyzing code repeatedly, Coverity can build models of already analyzed code, i.e. dump the information for already analyzed code fragments into =xmldb= files.
  • the derived models are added to the analysis of the current project:
    MyCoverityPath/cov-analyze --dir MyCoverityDir/INT -j 4 --enable-callgraph-metrics --enable-parse-warnings --all MyDerivedModelsList
    MyCoverityPath/cov-collect-models --dir MyCoverityDir/INT -of MyDerivedModelsDir/MyProjectName.xmldb
    • here, Coverity runs with 4 worker threads, each taking its arguments from MyCoverityDir/INT/c/output/commit-args.txt, e.g. /build/LHCb/nightlies/lhcb-coverity/x86_64-slc5-gcc43-opt/GAUDI/INT/c/output3/... etc.

  • the actual database of defects (see webpage) is built with
    cov-commit-defects --host --port 8080 --user admin --stream MyProjectName MyDependingProjects MyArguments
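Put together, a nightly run looks schematically like the pseudocode sketch below (paths, host and project names are placeholders, not the actual LHCb configuration):

```shell
COV=MyCoverityPath            # Coverity bin directory (placeholder)
INT=MyCoverityDir/INT         # intermediate analysis directory (placeholder)

# 1. wrap the build to record the code's internal relationships
$COV/cov-build --dir $INT make -j 20 -l 16
# 2. analyze, feeding in the models derived from already analyzed projects
$COV/cov-analyze --dir $INT -j 4 --enable-callgraph-metrics \
    --enable-parse-warnings --all MyDerivedModelsList
# 3. dump this project's models for reuse by depending projects
$COV/cov-collect-models --dir $INT -of MyDerivedModelsDir/MyProjectName.xmldb
# 4. commit the defects to the CIM database
$COV/cov-commit-defects --host MyCIMHost --port 8080 --user admin \
    --stream MyProjectName
```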

Sanity Checks

  • compare the number of checked versus actual code files, e.g.
    • tail of
    • find /build/LHCb/nightlies/lhcb-coverity/x86_64-slc5-gcc43-opt/GAUDI/ -name \*.cpp | wc

Good to know

  • it was observed that the code analysis is quite read/write intensive
    • it is not a problem when the intermediate analysis directory MyCoverityDir/INT is on a physical disk
    • it proved to be a problem when the intermediate directory MyCoverityDir/INT was on a RAID, slowing the whole analysis down by a factor of 4-5
    • therefore the intermediate analysis directory was moved to the RAM disk on /dev/shm
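A minimal sketch of the workaround (the subdirectory layout under /dev/shm is illustrative):

```shell
# /dev/shm is a RAM-backed tmpfs mount on Linux, so the heavy read/write
# traffic of the analysis never touches the RAID.
mkdir -p /dev/shm/coverity/INT
df -h /dev/shm    # check that enough memory is available before the build
```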

-- ThomasHartmann - 06-May-2011

Topic revision: r8 - 2016-02-18 - AndreaValassi