%TOC{title="Contents:"}%

---++ CPU Resource Table

This page is for the IT-internal planning of CPU resources for the LHC experiments, non-LHC experiments and various other groups.

We now have 7 different areas of CPU resource allocation: LSF public share, LSF dedicated queues/resources, LSF extra instance, openstack compute, openstack services, condor and dedicated physical VOBoxes.

The unit for all CPU processing numbers is HS06. I am assuming an error bar of about 5% on the presented numbers.

| *Experiment/Group* | *status Nov 2018* | | *pledge April 2019* | *pledge April 2020* |
| | [HS06] | | [HS06] | [HS06] |
| ALICE | 350000 | | 350000 | 350000 |
| ATLAS | 411000 | | 411000 | 411000 |
| CMS | 423000 | | 423000 | 423000 |
| LHCb | 86000 | | 86000 | 96000 |
| | | | | |
| TOTEM | 10000 | | 10000 | 10000 |
| LHCf | 2000 | | 2000 | 2000 |
| | | | | |
| ATLAS T3 CERN | 7800 | | 11000 | 11000 |
| CMS T3 CERN | 9400 | | 14000 | 14000 |
| LHCb T3 CERN | 4800 | | 7000 | 7000 |
| | | | | |
| ATLAS T3 Wisconsin | 7000 | | 7000 | 7000 |
| ATLAS T3 Tokyo | 9000 | | 9000 | 9000 |
| | | | | |
| COMPASS | 54000 | | 55000 | 55000 |
| AMS | 100000 | | 100000 | 100000 |
| Theory | 70000 | | 70000 | 70000 |
| NA48+NA62 | 50000 | | 50000 | 50000 |
| NA61 | 10000 | | 10000 | 10000 |
| Ship | 10000 | | 10000 | 10000 |
| NP02 | 7500 | | 7500 | 7500 |
| NP04 | 7500 | | 7500 | 7500 |
| other groups/exp. | 50000 | | 50000 | 50000 |
| | | | | |
| Sum | 1679000 | | 1689000 | 1689000 |

UNOSAT, SFT, OPERA, OPAL, L3, ALEPH, DELPHI, NA45, NOMAD, NA38NA50, ITDC, AMSP, etc. have just 4 HS06 in the LSF share.

Resources for HPC (e.g. Theory QCD, CFD, BE parallel) are not included in the table above. Also not included are the service resources for IT, EN, GS and BE.

---++ Resource details per experiment

The following tables show the split of the pledged resources across the different areas for the 4 LHC experiments.
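As a quick cross-check of the table above, the Nov 2018 column can be tallied in a few lines (a sketch; the figures are copied from the table, and only the Nov 2018 column is summed):

```python
# Nov 2018 CPU allocations in HS06, copied from the table above.
nov_2018_hs06 = {
    "ALICE": 350_000, "ATLAS": 411_000, "CMS": 423_000, "LHCb": 86_000,
    "TOTEM": 10_000, "LHCf": 2_000,
    "ATLAS T3 CERN": 7_800, "CMS T3 CERN": 9_400, "LHCb T3 CERN": 4_800,
    "ATLAS T3 Wisconsin": 7_000, "ATLAS T3 Tokyo": 9_000,
    "COMPASS": 54_000, "AMS": 100_000, "Theory": 70_000,
    "NA48+NA62": 50_000, "NA61": 10_000, "Ship": 10_000,
    "NP02": 7_500, "NP04": 7_500, "other groups/exp.": 50_000,
}

total = sum(nov_2018_hs06.values())
print(total)  # 1679000, matching the "Sum" row of the table
```

The stated ~5% error bar applies to the individual entries, so exact agreement of the sum is a bookkeeping check, not a precision claim.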
The openstack numbers are always the quotas, not the used number of cores/HS06.

The unit for all CPU processing numbers is [HS06]; for openstack the mapping is 1 core = 10 HS06.

*ALICE*
| *Resource type* | *Nov 2018* |
| | [HS06] |
| openstack services | 13100 |
| condor | 336900 |
| | |
| total sum | 350000 |

*ATLAS*
| *Resource type* | *Nov 2018* |
| | [HS06] |
| LSF dedicated T0 instance | 210000 |
| openstack services | 72800 |
| condor | 128200 |
| | |
| total sum | 411000 |

*CMS*
| *Resource type* | *Nov 2018* |
| | [HS06] |
| condor dedicated T0 instance | 302000 |
| openstack services | 52800 |
| condor | 68200 |
| | |
| total sum | 423000 |

*LHCb*
| *Resource type* | *Nov 2018* |
| | [HS06] |
| openstack services | 16000 |
| condor | 70000 |
| | |
| total sum | 86000 |

---++ Group codes and experiments

   * [[%ATTACHURL%/group_code_mapping.txt][group_code_mapping.txt]]: linux group code mappings

---++ Policies

*1.* The pledged CPU resources for the LHC experiments can be split between the different areas (VOBoxes, shares, dedicated, openstack), but the total pledge is constant.

*2.* The openstack resources are calculated via the quotas, not the used resources.

---++ History and Changes
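The 1 core = 10 HS06 mapping and the per-experiment splits above lend themselves to a small sanity check (a sketch; the resource figures are copied from the ALICE table, and the helper function names are my own):

```python
HS06_PER_CORE = 10  # openstack mapping stated above: 1 core = 10 HS06

def cores_to_hs06(cores: int) -> int:
    """Convert an openstack core quota to HS06."""
    return cores * HS06_PER_CORE

def hs06_to_cores(hs06: int) -> int:
    """Convert an HS06 figure back to the equivalent core quota."""
    return hs06 // HS06_PER_CORE

# ALICE split, Nov 2018 (HS06), copied from the table above.
alice = {"openstack services": 13_100, "condor": 336_900}

# The areas add up to the experiment's total pledge (policy 1).
assert sum(alice.values()) == 350_000

print(hs06_to_cores(alice["openstack services"]))  # 1310 cores behind the quota
```

The same check holds for the other three experiments; since the quotas, not the used cores, enter the accounting (policy 2), the split can change while the sum stays at the pledge.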
---++ Topic attachments

| *Attachment* | *History* | *Size* | *Date* | *Who* | *Comment* |
| [[%ATTACHURL%/group_code_mapping.txt][group_code_mapping.txt]] | r1 | 16.3 K | 2014-05-18 - 12:55 | BerndPanzerSteindel | linux group code mappings |
Topic revision: r34 - 2018-12-03 - BerndPanzerSteindel