A new page https://lhcb-shifters.web.cern.ch/ has been created with additional information for shifters, please also follow instructions there. This twiki page is not being maintained anymore

Grid Shifter Guide : Being updated autumn 2010

This topic is under development during Autumn 2010. It is experimental. Please contact Pete Clarke (clarke@cernNOSPAMPLEASE.ch) for complaints or suggestions.




This document is for LHCb Grid computing shifters. It is organised as follows.

*Doing Shifts:* The main body of this document describes the principal activities of a production shifter. It includes many links to web pages and to monitoring plots which may help you in the shift.

*Compendium of Examples:* We (try to) give a compendium of problems and processes you may encounter. This is written so that shifters and other experts can easily edit and add their own useful examples in a simple, factorised way (i.e. you don't have to add text in the main body of this document).

*Preliminaries:* We provide links to information which might be useful, but which you probably only need to read once or twice until you are familiar with it. For this reason it comes at the end.


Doing Shifts


This section contains suggested activities for:

  • The start and end of shift
  • The principal activities during a shift
  • Links to many web pages which you may find useful.
  • A suggested shift checklist
  • A compendium of possible problems and possible actions to take.
  • Instructions and a template for the report you are strongly requested to lodge in the ELOG at the end of each shift.

Start and End of Shift

At the start of a shift.

  • Open your favourite links. Many shifters will have their own "favourite" set of links which they open at the start of a shift. Most of those you might want will be linked below. You will develop your own over time.

At the end of a shift you should

  • Enter a short summary in the ELOG. This is important as it provides an interface to the next shifter, and allows others to get a summary. This might contain:
    • Your name
    • Status of any ongoing data reconstruction or reprocessing productions (including EXPRESS, FULL,...)
    • Status and progress of problems you inherited at the beginning of the shift (i.e. resolved, or still ongoing)
    • Summary of any new problems in your shift (there will likely be separate Elog entries for the details)
    • ..anything else relevant...
    • It is important that you tick only the shift report box. This will mean that the report automatically goes to an LHCb summary page.

  • Return the key to the operations room to the secretariat if appropriate.

And finally: please log out of the terminal stations in the operations room so that you don't block them for the next person.

Principal Activities During a Shift

The principal things you need to keep an eye on during the shift are listed below. In many cases the "activities" are not orthogonal, i.e. they may be different ways of viewing the same thing.

  • Find out what has been happening in the last few days: If you haven't been on shift for a while, it is probably a good idea to get a quick picture of the fills and associated data-taking runs of the last few days. This will help you understand what to expect in the reconstruction productions. How to do this

  • Find out what the current reconstruction productions are, if you are not already familiar with them. To do this look at the production request page in the Dirac portal. If you look down it you will see those that are associated with Reconstruction and are Active (if in doubt it might help to filter on these using the filter arrow at top left). You are typically looking for those which are Recox-Strippingy or some variation of this (there may be validation versions under test which you can ignore). How to do this

  • Read the ELOG for the last few days. You should read the end of shift report from the last shifter. You will also be able to pick up threads pertaining to current issues.

  • Live Data flow: This means keeping abreast of the data flow from the pit to the point where it is picked up by the current reconstruction production. How to do this.

  • Current Reconstructions: Ensure the current reconstruction productions are progressing How to do this.

  • General Job success rate for production jobs: The shifter should monitor the overall job success rate for all production jobs. How to do this.

  • Progress of productions reaching the book-keeping: The shifter should look at the tables made by Philippe. How to do this.

  • General job success rate for user jobs: The shifter should look at the progress of user jobs. How to do this

  • Monte Carlo production: t.b.d

  • Site Centric View: The shifter should take a site-centric view of job success and SAM test results. How to do this

  • Daily Operations Meeting: Attend this at 11.15 CET.

  • Elog entries: Make entries when there are new problems/observations or when there are developments to an existing problem. Making Elog entries

  • Escalate a problem: If a problem is discovered, experienced shifters may recognise the context and either know how to fix it, or escalate it. New and inexperienced shifters may not easily know the next step. The general procedure is: Investigate: go as far as you can using the monitoring plots and the examples below. Consult the GEOC: when you reach an impasse, or are in doubt, please consult the GEOC first. Please resist the temptation to simply interrupt the Production Manager (or anyone else) who may be sitting behind you. By escalating to the GEOC you will (i) be more likely to learn about a known problem and (ii) aid continuity of knowledge of the problem you have found. The GEOC will escalate to the Production Manager or others as necessary.

How to: See what has happened in last few days

Look at the RUNDB to see the fills of the last days. By clicking on the fills you will get a list of runs associated with each. It might be helpful to note down which runs start and end each recent fill, and the luminosity. This may be useful when using the DIRAC portal later. Check that all runs destined for offline are in the BKK.

Howto: Look at live data flow

This means keeping abreast of the data flow from the pit to the point where it is picked up by the current reconstruction production. Each live data-taking run results in RAW DATA files produced in the pit being transferred OFFLINE and into the book-keeping (BKK). Once in the BKK these files are automatically found by the current reconstruction production and processed. This should rarely fail, but as a first task the shifter can look and verify the integrity of the chain.

When we are in proper data taking you will see the LHCb online page showing COLLISION10 with data destination OFFLINE. If the destination is LOCAL then you can ignore it.

Howto: Look at the current reconstruction production.

In general, if we are data taking, or are in a re-processing period, the latest data reconstruction production will be running. You will know what the current active data reconstructions are from the earlier step at the start of the shift. These have 3 visible steps: (i) reconstruction and stripping, (ii) merging, (iii) replication to sites. The shifter must keep an eye on these to ensure they are progressing, and that there is not an unexpected number of failing jobs resulting from a recent (as yet unknown) problem.

Questions to be answered now are:

  • Is the RAW data for each run being picked up by the current data reconstruction production?
  • Is the merging going properly? When each run is 100% reconstructed and stripped, the stripped data should be picked up by the merging production to produce DSTs. There is some delay here: typically merging may not yet be running on the latest runs from the very latest fill.
  • When merged, the DST data should appear in the BKK.

This link picks out the active productions from the production monitoring page:

This link is supposed to pick out the active reconstruction and merging productions. (Rob fixed it; Pete owes him one beer.)

This link takes you to the book-keeping where (after many clicks) you can see if the DSTs are there.

The following table shows the current active production 'Activities' with links that will show each:

Activity | Express | Full | Stripping | Merging | Link
Reproc: Reco12 - after Aug TS | - | 12908 | - | - | link
Reproc: Reco12 - before Aug-TS | - | 12503,504,518,519,601,628,674,701,714,727 | - | - | link
Reco11a-S16 MD | - | 12527 | 12528,12539,12542 | 12529-12538, 12540,12542,12543 | link
Reco11a-S16 MU | - | 12448 | 12449,12460,12463 | 12450-12459, 12461,12462,12464 | link
Reco11a-S16 MD (Post Sept TS) | - | 12362 | 12363,12374,12377 | 12364-12373, 12375,12376,12378 | link
Reco11a-S16 MU (Post Sept TS) | - | 12051 | 12052,12063,12066 | 12053-12062, 12064,12065,12067 | link
Reco11a-S16 MD | - | 11891 | 11892,11903 | 11893-11902, 11904,11905 | link
Reco11a-S16 MU | - | 11878 | 11912,11923 | 11913-11922, 11924,11925 | link
Reco11 MD after July TS | 11715 | 11716 | 11752,11755 | 11753,11754,11756-11765 | link
Reco11 MU after July TS | 11367 | 11368 | 11553,11564 | 11554-11563, 11565,11566 | link

Once you have checked that there are no gross problems with productions, you need to look at the next level down. A few problematic cases can hold up completion of the whole chain. This will typically show up by some of the productions getting stuck at 99.x% complete. This is more involved and is described in the "Compendium of Examples" below.

Hints on the productions monitor page

From the productions monitoring page (e.g. https://lhcb-web-dirac.cern.ch/DIRAC/LHCb-Production/lhcb_prod/jobs/ProductionMonitor/display) you can follow many types of "transformations", where "transformation" is a general name including "job productions" (MC, merge, stripping, reco, reprocessing) and "data movement" (replication, removal) activities. As production shifters, you are mostly interested in "job productions".

Reconstruction, reprocessing, stripping (and the related merge) prods are more important because this is the real data. Here, the single most important view to look at is "file status": e.g., click on any Reconstruction "line" and you'll get the menu where "file status" is. This gives you a summary (and also the complete list, if you click further) of the input files to that production. You'll get file status for everything but MC.

Files can be in:

  • processed: what is done. Jobs treat files, and the job that treated this file ended successfully. Good, one down!
  • unused: files that are not being treated by any job.
  • assigned: files that are being treated RIGHT NOW by a job.

Basically, this is the cycle:

  1. A prod is created with a Bk query (it can be seen by clicking on "input data query", next to "file status") and 0 files in.
  2. After some minutes, the Bk query is run, and the files retrieved are assigned to the production, and are all in "unused"
  3. Wait some more minutes and an agent will create "tasks", that will become in the end jobs. Every file assigned to a task is marked as "assigned"
    a. You'll see the column "created" of https://lhcb-web-dirac.cern.ch/DIRAC/LHCb-Production/lhcb_prod/jobs/ProductionMonitor/display being populated
    b. Some tasks require more than one input file to be created. When this is not yet possible, the agent will just wait. This is the case, for example, for merge prods.
  4. If the job is successful, its files are marked as "processed"; otherwise another agent marks the files as "unused" again, and point 3 restarts.
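The cycle above can be sketched as a small state machine. The status names and the 10-attempt limit behind "MaxReset" are taken from this page; the class itself is only an illustration, not DIRAC code.

```python
# Illustrative sketch of the transformation-file lifecycle described above.
# Status names ("Unused", "Assigned", "Processed", "MaxReset") and the
# 10-attempt limit come from this guide; the class itself is hypothetical.

MAX_ATTEMPTS = 10  # after 10 failed jobs a file is marked MaxReset

class TransformationFile:
    def __init__(self, lfn):
        self.lfn = lfn
        self.status = "Unused"   # step 2: retrieved by the Bk query
        self.attempts = 0

    def assign_to_task(self):
        # step 3: an agent creates a task and marks the file Assigned
        self.status = "Assigned"
        self.attempts += 1

    def job_finished(self, success):
        # step 4: Processed on success, otherwise back to Unused
        if success:
            self.status = "Processed"
        elif self.attempts >= MAX_ATTEMPTS:
            self.status = "MaxReset"  # special case: should be reported
        else:
            self.status = "Unused"    # will be picked up again

f = TransformationFile("/lhcb/data/run12345_raw")
for _ in range(MAX_ATTEMPTS):   # ten jobs, all failing
    f.assign_to_task()
    f.job_finished(success=False)
print(f.status)  # MaxReset
```

After nine failures the file would simply be "Unused" again; only the tenth pushes it into the terminal "MaxReset" state described below.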

There are some "special cases", none of which is a good sign:

  • MaxReset: the system has tried to process the file 10 times, with 10 different jobs, and all of them failed.
  • missingLFC: the file can't be found on the LFC
  • applicationCrash: very rare, sometimes a DIRAC problem.

These last 3 cases should be reported. For the "MaxReset" case, it would be good to find out which jobs tried to process those files and report the reason for their failures, but this is quite difficult without a deeper understanding of the system.

Remember: the transformation system handles tasks, the WMS handles jobs. A task lives in DIRAC only; a job lives on the grid. For transformations that are "job productions", each task will become a job.

Howto: Look at the processed data in the bookkeeping

This information is provided by Philippe's Dashboard tables.

Howto: Monitor general job success rate

Jobs may fail for many reasons.

  • Staging.
  • Stalled Jobs.
  • Segmentation faults.
  • DB access.
  • Software problems.
  • Data access.
  • Shared area access.
  • Site downtime.
  • Problematic files.
  • Excessive runtime.

The shifter should monitor the overall job success rate for all production jobs. If jobs start failing at a single site, and it is not a "known problem" then it may be that a new problem has arisen at that site. If jobs start failing at all sites then it is more likely to be a production or application misconfiguration.

The problem for the shifter is that some failures are important and some are not, and some will already be "well known" while others will be new.

  • A common job failure minor status is "Input Resolution Errors". This can be transitory when a new production is launched and some disk servers get "overloaded". However, these jobs are resubmitted automatically, and if they then run there is no problem.
  • Jobs which retry too many times will time out. These show up with a final minor status of "Watchdog Identified Jobs as Stalled". These are cause for concern.
  • Jobs have been seen to fail at a site which is known to have its data servers down, but still gets requests for data. This is quite hard: the shifter will observe lots of failed jobs continuing over days, but the problem may well have been reported long ago and remedial work may be underway.
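As a rough illustration of the "one site vs all sites" distinction above, the sketch below computes a per-site failure rate from a handful of invented job records; in practice the numbers come from the monitoring plots, not from code like this.

```python
# Sketch: distinguish "one site failing" from "all sites failing", as
# described above. The job records are invented sample data; real numbers
# come from the DIRAC job monitoring pages.
from collections import defaultdict

jobs = [  # (site, final_status) - hypothetical sample
    ("LCG.CERN.ch",  "Done"),
    ("LCG.CERN.ch",  "Done"),
    ("LCG.RAL.uk",   "Failed"),
    ("LCG.RAL.uk",   "Failed"),
    ("LCG.RAL.uk",   "Failed"),
    ("LCG.IN2P3.fr", "Done"),
]

stats = defaultdict(lambda: [0, 0])       # site -> [failed, total]
for site, status in jobs:
    stats[site][1] += 1
    if status == "Failed":
        stats[site][0] += 1

# flag sites where more than half the jobs failed
suspect = [site for site, (failed, total) in stats.items()
           if total and failed / total > 0.5]
print(suspect)  # ['LCG.RAL.uk']
```

If only one site shows up as suspect, a new site problem is likely; if every site does, the text above suggests a production or application misconfiguration instead.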

A set of monitoring plots has been assembled to help you try to diagnose problems.

If there are lots of production jobs failing then you need to investigate. This may be a new problem associated with the current processing of recent runs, or it may be some manifestation of an old problem. Which site is the problem connected with? Are the failures associated with an old or a current production? Are the failures in reconstruction or merging? Are the failures re-tries associated with a site which has a known problem? If it looks like an old problem there will likely be a comment in the logbook.

At this point you may see that, for example, "site-XYZ" is failing lots of jobs in "Merging" with "InputDataResolutionErrors". You probably want to identify which productions/runs these are associated with. Go back to the current productions to do this. It is not trivial from here, as you need to identify which production the failures are associated with. On the main productions monitor page you can look in the "failed jobs" column, which might give you a clue. Once you have identified the production you can look at the "run status" and also "show jobs" in the pop-out menu, and try to correlate them with site-XYZ.

Once you have used these monitoring plots, the next line of diagnosis depends upon the shifter's experience. One has to look at the job log outputs and see if there is any information which helps diagnose the problem. You can also try the site-centric monitoring plots (see below).

Don't be afraid to ask the GEOC if in doubt.

Using the CLI to look at failed jobs

Using the CLI, the command:

dirac-production-progress [<Production ID>]
entered without any arguments will return a breakdown of the jobs of all current productions. Entering one or more ProdIDs returns only the breakdown of those productions.

A more detailed breakdown is provided by:

dirac-production-job-summary <Production ID> [<DIRAC Status>]
which also includes the minor status of each job category and provides an example JobID for each category. The example JobIDs can then be used to investigate the failures further.

Beware of failed jobs which have been killed - when a production is complete, the remaining jobs may be automatically killed by DIRAC. Killed jobs like this are ok.
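For example, when counting failures one might discard the killed jobs first. The summary rows below are an invented format and invented minor-status strings, not the real dirac-production-job-summary output; the point is only the filtering logic.

```python
# Sketch: when counting "Failed" jobs, ignore those DIRAC killed after the
# production completed (see the note above). The rows and minor-status
# strings are a made-up stand-in for dirac-production-job-summary output.
rows = [
    # (status, minor_status, count) - hypothetical sample
    ("Failed", "Watchdog identified this job as stalled", 12),
    ("Failed", "Job has been killed",                      30),
    ("Done",   "Execution Complete",                      950),
]

real_failures = sum(count for status, minor, count in rows
                    if status == "Failed" and "killed" not in minor.lower())
print(real_failures)  # 12: the 30 killed jobs are not a concern
```

Real minor-status strings vary, so the "killed" substring check is only illustrative; always confirm against the actual summary before reporting numbers in the ELOG.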

Non-Progressing Jobs

In addition to failed jobs, jobs which do not progress should also be monitored. Particular attention should be paid to jobs in the states "Waiting" and "Staging". Problematic jobs at this stage are easily overlooked since the associated problems are not easily identifiable.

Non-Starting Jobs

Jobs arriving at a site but then failing to start have multiple causes. One of the most common is that a site is due to enter scheduled downtime and is no longer submitting jobs to the batch queues. Jobs will stay at the site in a "Waiting" state and report that there are no CEs available. Multiple jobs in this state should be reported.

Howto: Monitor user job success rate

User jobs may fail because a user has submitted a job with an error (which is not an ops issue) or there may be a problem at a site (which you do need to care about). A set of monitoring plots has been assembled to at least alert the shifter to a problem, and to start the diagnostic process.


  • Monitoring Plots: New Overview from Mark

Data Transfer Monitoring

This one needs writing by someone who knows what to describe. For now, here are some plots:

Space Token Monitoring

Whilst file transfer is a primary shifter function, overall space usage is not, as the production and data managers should be watching this. However, it does not hurt for the shifter to be aware of the situation, particularly if it is leading to upload failures. It never hurts to mention obvious space problems at the daily ops meeting.

This is a good link to be aware of - it shows used and free space at all sites:

Howto: Monitor Monte Carlo Production

  • Identify the Productions of interest
  • Use either the Production Monitoring webpage or the Job Monitoring webpage to check the job progress.
  • Check job failures from the productions of interest to ensure they are random/site-specific rather than a problem with the production itself.
  • ...

Howto: Site Centric View

Firstly look at the site summary page

The SAM dashboard shows you the results of SAM availability tests (here is the SAM topic in full, in case it is of any further help).

The general monitoring plots may have already alerted you to failing jobs at a particular site. We have also provided a set of plots centred on each site with a bit more information.

Making Elog Entries

Here is the ELOG.

All Grid Shifter actions of note should be recorded in the ELOG. This has the benefit of allowing new Grid Shifters to familiarise themselves with recent problems with current productions. Elog entries should contain as much relevant information as possible.

A typical ELOG entry for a new problem contains some or all of:

  • The relevant ProdID or ProdIDs.
  • An example JobID.
  • A copy of the relevant error message and output.
  • The application in which the job failed
  • The number of affected jobs.
  • The Grid sites affected.
  • The time of the first and last occurrence of the problem.
Once a problem has been logged it is useful to report the continuing status of the affected productions at the end of each shift.

If Elog is down, send a notification email to lhcb-production@cernNOSPAMPLEASE.ch.

Bug Reporting

You may reach a point where you should submit a bug report. Before submitting a bug report:

  • Identify conditions under which the bug occurs.
  • Record all relevant information.
  • Try to ensure that the bug is reproducible.

Once the shifter is convinced that the behaviour they are experiencing is a bug, they should prepare to submit a bug report. Users should first browse the current bugs to check whether the bug has already been reported (Fig. 8).

Figure 8: Browse current bugs.

Assuming the bug is new, the procedure to submit a bug report is as follows:

  • Navigate to the "Support" tab at the top of the page (Fig. 6) and click on "submit".
  • Ensure that the submission webform contains all relevant information (Fig. 9).
  • Set the appropriate severity of the problem.
  • Write a short and clear summary.
  • Set the privacy option to "private".
  • Submit the bug report.

Figure 9: Example bug report.

More Details on looking at jobs


A particular job is tagged with the following information:

  • Production Identifier (ProdID), e.g. 00001234 - the 1234th production.
  • Job Identifier (JobID), e.g. 9876 - the 9876th job in the DIRAC system.
  • JobName, e.g. 00001234_00000019 - the 19th job in production 00001234.

Job Status

The job status of a successful job proceeds in the following order:

  1. Received,
  2. Checking,
  3. Staging,
  4. Waiting,
  5. Matched,
  6. Running,
  7. Completed,
  8. Done.

Jobs which return no heartbeat have a status of "Stalled", and jobs where any workflow module returns an error status are classed as "Failed".

The basic flowchart describing the evolution of a job's status can be found in figure 1. Jobs are only "Grid-active" once they have reached the "Matched" phase.

Figure 1: Job status flowchart. Note that the "Checking" and "Staging" statuses are omitted.
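The status ladder above can be written down as data; the helper below illustrates the "Grid-active once Matched" rule and is only a sketch, not a DIRAC API.

```python
# The ordered job states listed above, as data. A job is "Grid-active"
# once it has reached "Matched"; the helper is illustrative only.
STATUS_ORDER = ["Received", "Checking", "Staging", "Waiting",
                "Matched", "Running", "Completed", "Done"]

def is_grid_active(status):
    """True once a job has reached the Matched phase; None for states
    outside the normal ladder (e.g. Stalled, Failed)."""
    if status not in STATUS_ORDER:
        return None  # cannot tell from the ladder alone
    return STATUS_ORDER.index(status) >= STATUS_ORDER.index("Matched")

print(is_grid_active("Waiting"))  # False
print(is_grid_active("Running"))  # True
```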

Job Output

The standard output and standard error of a job can be accessed through the API, the CLI and the webpage via a global job "peek".

Job Output via the CLI

The std.out and std.err for a given job can be retrieved using the CLI command:

dirac-wms-job-get-output <JobID> | [<JobID>]
This creates a directory containing the std.out and std.err for each JobID entered. Standard tools can then be used to search the output for specific strings, e.g. "FATAL".
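A minimal sketch of that search, assuming the command produced one directory per JobID containing a std.out file; the layout is an assumption, and the sample directory is created by the script itself so the example is self-contained.

```python
# Sketch: after dirac-wms-job-get-output has created a directory per JobID,
# search each std.out for "FATAL" lines. The one-directory-per-JobID layout
# and file name "std.out" are assumptions; adapt to the real output.
import os
import tempfile

# --- build a stand-in output directory so the example is self-contained ---
outdir = tempfile.mkdtemp()
jobdir = os.path.join(outdir, "12345")
os.makedirs(jobdir)
with open(os.path.join(jobdir, "std.out"), "w") as fh:
    fh.write("INFO  starting\nFATAL  could not open input file\nINFO  end\n")

# --- the actual search: collect (job, line) hits ---
hits = []
for job in sorted(os.listdir(outdir)):
    path = os.path.join(outdir, job, "std.out")
    if os.path.exists(path):
        with open(path) as fh:
            hits += [(job, line.strip()) for line in fh if "FATAL" in line]

print(hits)  # one FATAL hit from job 12345
```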

To simply view the last few lines of a job's std.out ("peek") use:

dirac-wms-job-peek <JobID> | [<JobID>]

Job Output via the Job Monitoring Webpage

There are two methods to view the output of a job via the Job Monitoring Webpage. The first returns the last 20 lines of the std.out and the second allows the Grid Shifter to view all the output files.

Figure 2: Peek the std.out of a job via the Job Monitoring Webpage.

To "peek" the std.out of a job:

  1. Navigate to the Job Monitoring Webpage.
  2. Select the relevant filters from the left panel.
  3. Click on a job.
  4. Select "StandardOutput" (Fig. 2).

Figure 3: View all the output files of a job via the Job Monitoring Webpage.

Similarly, to view all output files for a job:

  1. Navigate to the Job Monitoring Webpage.
  2. Select the relevant filters from the left panel.
  3. Click on a job.
  4. Select "Get Logfile" (Fig. 3).

This method can be particularly quick if the Grid Shifter only wants to check the output of a selection of jobs.

Job Pilot Output

The output of the Job Pilot can also be retrieved via the API, the CLI or the Webpage.

Job Pilot Output via the CLI

To obtain the Job Pilot output using the CLI, use:

dirac-admin-get-pilot-output <Grid pilot reference> [<Grid pilot reference>]
This creates a directory for each JobID containing the Job Pilot output.

Job Pilot Output via the Job Monitoring Webpage

Viewing the std.out and std.err of a Job Pilot via the Job Monitoring Webpage is achieved by:

  1. Navigate to the Job Monitoring Webpage.
  2. Select the relevant filters from the left panel.
  3. Click on a job.
  4. Select "Pilot" then "Get StdOut" or "Get StdErr" (Fig. 4).

Figure 4: View the pilot output of a job via the Job Monitoring Webpage.


Compendium of Example Procedures To Address Problems.


It is hoped that shifters and other experts will contribute to this section and build it up

To make it easy it is arranged so that you add a separate "topic" for each example problem you contribute. You do not need to add text in the main body of this document.

Edit this page and add a bullet under the example list below. Add only a few words to briefly describe the example you are adding. Keep it short - leave the details out. You finish this bullet with the following: "Go to ShifterGuideExamplexxxxxxxx" (where you replace xxxxxxx with your own topic title). You then exit and save. You can now click on your topic title, which will be highlighted in red. This will take you to a fresh area where you can write what you like.

Example List


Preliminary Things & Background Information


Grid Certificates

A Grid certificate is mandatory for Grid Shifters. If you don't have a certificate you should register for one through CERN LCG and apply to join the LHCb Virtual Organisation (VO).

To access the production monitoring webpages you will also need to load your certificate into your browser. Detailed instructions on how to do this can be found on the CERN LCG pages.

The new shifter should:

Grid Sites

Jobs submitted to the Grid will be scheduled to run at one of a number of Grid sites. The exact site at which a job is executed depends on the job requirements and the current status of all relevant grid sites. Grid sites are grouped into two tiers, Tier-1 and Tier-2. CERN is an exception: because it is also responsible for processing and archiving the RAW experimental data, it is also referred to as the Tier-0 site.

Tier-1 Sites

Tier-1 sites are used for Analysis, Monte Carlo production, file transfer and file storage in the LHCb Computing Model.

Tier-2 Sites

There are numerous Tier-2 sites with sites being added frequently. As such, it is of little worth presenting a list of all the current Tier-2 sites in this document. Tier-2 sites are used for MC production in the LHCb Computing Model.

Backend Storage Systems

Three backend storage technologies are employed at the Tier-1 sites: Castor, dCache and Storm. The Tier-1 sites which utilise each technology are summarised in the table below:

Backend Storage | Tier-1 Site
Castor | CERN, RAL
dCache | IN2P3, NIKHEF, GridKa, PIC
Storm | CNAF

File Transfer Service, FTS

Many of the LHCb data transfers are done under the auspices of the central FTS service provided at CERN and the other T1 centres. It is possible to monitor what is going on with the transfers through these services via the following links at the Tier-1 sites.

DIRAC Scripts




Site Downtime Calendar

The calendar [6] displays all sites with scheduled and unscheduled downtime. Calendar entries are automatically parsed from the site downtime RSS feed and added to the calendar.

Occasionally the feed isn't parsed correctly and Grid Shifters should double-check that the banned and allowed sites are correct. Useful scripts for this are:


DIRAC 3 Scripts

DIRAC Admin Scripts

  • dirac-admin-accounting-cli
  • dirac-admin-add-user
  • dirac-admin-allow-site
  • dirac-admin-ban-site
  • dirac-admin-delete-user
  • dirac-admin-get-banned-sites
  • dirac-admin-get-job-pilot-output
  • dirac-admin-get-job-pilots
  • dirac-admin-get-pilot-output
  • dirac-admin-get-proxy
  • dirac-admin-get-site-mask
  • dirac-admin-list-hosts
  • dirac-admin-list-users
  • dirac-admin-modify-user
  • dirac-admin-pilot-summary
  • dirac-admin-reset-job
  • dirac-admin-service-ports
  • dirac-admin-site-info
  • dirac-admin-sync-users-from-file
  • dirac-admin-upload-proxy
  • dirac-admin-users-with-proxy

DIRAC Bookkeeping Scripts

  • dirac-bookkeeping-eventMgt
  • dirac-bookkeeping-eventtype-mgt
  • dirac-bookkeeping-ls
  • dirac-bookkeeping-production-jobs
  • dirac-bookkeeping-production-informations


  • dirac-clean

DIRAC Configuration

  • dirac-configuration-cli

DIRAC Distribution

  • dirac-distribution


  • dirac-dms-add-file
  • dirac-dms-get-file
  • dirac-dms-lfn-accessURL
  • dirac-dms-lfn-logging-info
  • dirac-dms-lfn-metadata
  • dirac-dms-lfn-replicas
  • dirac-dms-pfn-metadata
  • dirac-dms-pfn-accessURL
  • dirac-dms-remove-pfn
  • dirac-dms-remove-lfn
  • dirac-dms-replicate-lfn

DIRAC Embedded

  • dirac-embedded-external

DIRAC External

  • dirac-external


  • dirac-fix-ld-library-path

DIRAC Framework

  • dirac-framework-ping-service

DIRAC Functions

  • dirac-functions.sh


  • dirac-group-init

DIRAC Jobexec

  • dirac-jobexec


  • dirac-lhcb-job-replica
  • dirac-lhcb-manage-software
  • dirac-lhcb-production-job-check
  • dirac-lhcb-sam-submit-all
  • dirac-lhcb-sam-submit-ce

DIRAC Myproxy

  • dirac-myproxy-upload

DIRAC Production

  • dirac-production-application-summary
  • dirac-production-change-status
  • dirac-production-job-summary
  • dirac-production-list-active
  • dirac-production-list-all
  • dirac-production-list-id
  • dirac-production-logging-info
  • dirac-production-mcextend
  • dirac-production-manager-cli
  • dirac-production-progress
  • dirac-production-set-automatic
  • dirac-production-set-manual
  • dirac-production-site-summary
  • dirac-production-start
  • dirac-production-stop
  • dirac-production-submit
  • dirac-production-summary


  • dirac-proxy-info
  • dirac-proxy-init
  • dirac-proxy-upload

DIRAC Update

  • dirac-update


  • dirac-wms-job-delete
  • dirac-wms-job-get-output
  • dirac-wms-job-get-input
  • dirac-wms-job-kill
  • dirac-wms-job-logging-info
  • dirac-wms-job-parameters
  • dirac-wms-job-peek
  • dirac-wms-job-status
  • dirac-wms-job-submit
  • dirac-wms-job-reschedule

Common Acronyms

Access Control Lists
Application Programming Interface
Advance Resource Connector
A Realisation of Distributed Analysis
Berkeley Database Information Index
Batch Object Submission System
Certification Authority
CDF Central Analysis Farm
Common Computing Readiness Challenge
Collider Detector at Fermilab
Computing Element
Organisation Européenne pour la Recherche Nucléaire: Switzerland/France
Centro Nazionale per la Ricerca e Sviluppo nelle Tecnologie Informatiche e Telematiche: Italy
Conditions Database
Central Processing Unit
Certificate Revocation List
Configuration Service
Directed Acyclic Graph
Data Challenge 2004
Data Challenge 2006
Data Link Switching Client Access Protocol
Distributed Interactive Analysis of Large datasets
Distributed Infrastructure with Remote Agent Control
DIRAC Secure Transport
Data Location Interface
Dynamically Linked Libraries
Distinguished Name
Domain Name System
Data Replication Service
Data Summary Tape
Electromagnetic CALorimeter
Enterprise Grid Alliance
Enabling Grids for E-sciencE
Electronic Log
Event Tag Collection
First In First Out
File Transfer Service
Global Access to Secondary Storage
Grid File Access Library
Global Grid Forum
Grid Index Information Service
Grid Laboratory Uniform Environment
Grid Resource Allocation Manager
Grid File Transfer Protocol
Grid Computing Centre Karlsruhe
Grid Physics Network
Grid Resource Information Server
Grid Security Infrastructure
Globus Toolkit
Graphical User Interface
Globally Unique IDentifier
Hadron CALorimeter
High Energy Physics
High Level Trigger
Hyper-Text Markup Language
Hyper-Text Transfer Protocol
Institut National de Physique Nucleaire et de Physique des Particules: France
International Virtual Data Grid Laboratory
Job Description Language
Job Database
Job Identifier
Level 0
Local Area Network
LHC Computing Grid
LCG Information System
LCG User Interface
LCG Workload Management System
Lightweight Directory Access Protocol
LCG File Catalogue
Logical File Name
Large Hadron Collider
Large Hadron Collider beauty
Load Share Facility
Monte Carlo
Monitoring and Discovery Service
Mass Storage System
National Institute for Subatomic Physics: Netherlands
Open Grid Services Architecture
Open Grid Services Infrastructure
Open Science Grid
Production ANd Distributed Analysis
Personal Computer
Physics Data Challenge
Physical File Name
Port d’Informació Científica: Spain
Public Key Infrastructure
Pool Of persistent Objects for LHC
Portable Operating System Interface
Particle Physics Data Grid
Production Identifier
Preshower Detector
Relational Grid Monitoring Architecture
Rutherford-Appleton Laboratory: UK
Resource Broker
reduced Data Summary Tape
Remote File Input/Output
Ring Imaging CHerenkov
Replica Manager
Remote Procedure Call
Real Time Trigger Challenge
Service Availability Monitoring
Storage Element
Service Oriented Architecture
Simple Object Access Protocol
Scintillator Pad Detector
Storage Resource Manager
Secure Socket Layer
Storage URL
Transmission Control Protocol / Internet Protocol
Transient Detector Store
Transient Event Store
Transient Histogram Store
Trigger Tracker
Transport URL
Uniform Resource Locator
Virtual Data Toolkit
VErtex LOcator
Virtual Organisation
Virtual Organisation Membership Service
Wide Area Network
Workload Management System
Worker Node
Web Services Description Language
Web Services Resource Framework
World Wide Web
eXtensible Markup Language
XML Remote Procedure Call

-- PaulSzczypka - 14 Aug 2009

-- PeterClarke - 19-Oct-2010

Topic attachments
Attachment | History | Size | Date | Who | Comment
dirac-primary-states.png | r1 | 109.0 K | 2010-11-14 - 13:33 | PeterClarke | Dirac states diagram
get_logfiles.png | r1 | 37.8 K | 2010-11-14 - 13:34 | PeterClarke |
get_pilot_output.png | r1 | 42.8 K | 2010-11-14 - 13:35 | PeterClarke |
get_std_out.png | r1 | 37.6 K | 2010-11-14 - 13:35 | PeterClarke |
Topic revision: r43 - 2012-11-12 - StefanRoiser