Difference: USCMSTier2Deployment (1 vs. 150)

Revision 150 2019-09-19 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 41 to 41
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 100408 9096 4548 4120 100 06/27/18
T2_US_Florida 143942 8398 8398 4076 100 05/15/19
Changed:
<
<
T2_US_MIT 115019 10568 10568 4000 100 11/09/19
>
>
T2_US_MIT 110917 10616 10616 4000 100 11/09/19
 
T2_US_Nebraska 112395 10304 5152 5000 100 08/08/19
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10757 5456 3328 80 6/24/19

Revision 149 2019-09-11 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 41 to 41
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 100408 9096 4548 4120 100 06/27/18
T2_US_Florida 143942 8398 8398 4076 100 05/15/19
Changed:
<
<
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
>
>
T2_US_MIT 115019 10568 10568 4000 100 11/09/19
 
T2_US_Nebraska 112395 10304 5152 5000 100 08/08/19
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10757 5456 3328 80 6/24/19

Revision 148 2019-08-08 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 
T2_US_Caltech 100408 9096 4548 4120 100 06/27/18
T2_US_Florida 143942 8398 8398 4076 100 05/15/19
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
Changed:
<
<
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
>
>
T2_US_Nebraska 112395 10304 5152 5000 100 08/08/19
 
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10757 5456 3328 80 6/24/19
T2_US_Wisconsin 128151 12888 6444 4100 100 02/04/19

Revision 147 2019-06-25 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 44 to 44
 
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
Changed:
<
<
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
>
>
T2_US_UCSD 122247 10757 5456 3328 80 6/24/19
 
T2_US_Wisconsin 128151 12888 6444 4100 100 02/04/19
Total HEP Sites 128151 12888 6444 4100    
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17

Revision 146 2019-05-23 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 40 to 40
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 100408 9096 4548 4120 100 06/27/18
Changed:
<
<
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
>
>
T2_US_Florida 143942 8398 8398 4076 100 05/15/19
 
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18

Revision 145 2019-05-17 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 62 to 62
 
Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted FTE+
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732 0.20
Changed:
<
<
T2_US_Florida 5517 3669 199 2 28 2571 193 1200 0 0 3669 2571 0.10
>
>
T2_US_Florida 5923 3939 199 2 28 2750 207 1189 0 0 3939 2750 0.10
 
T2_US_MIT 8000 3800 640 0 208 3641 200 159 0 0 3800 3641 1.00
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008 0.50
T2_US_Purdue 8806 3721 194 230 94 3244 413 406 0 41 3721 3244 0.23

Revision 144 2019-05-16 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 62 to 62
 
Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted FTE+
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732 0.20
Changed:
<
<
T2_US_Florida 5923 4146 199 2 28 2571 207 1575 0 0 4146 2571 0.10
>
>
T2_US_Florida 5517 3669 199 2 28 2571 193 1200 0 0 3669 2571 0.10
 
T2_US_MIT 8000 3800 640 0 208 3641 200 159 0 0 3800 3641 1.00
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008 0.50
T2_US_Purdue 8806 3721 194 230 94 3244 413 406 0 41 3721 3244 0.23
T2_US_UCSD 5500 2000 168 348 30 1572 444 432 1000 60 3060 2546 0.00
Changed:
<
<
T2_US_Wisconsin 8200 3800 555 13 83 2650 260 1150 0 50 3850 2650 0.05
>
>
T2_US_Wisconsin 8200 3590 555 13 83 2650 510 740 0 50 3640 2650 0.05
 
Totals                          

  • "Raw disk" = all un-replicated disk, purchased with Tier-2 funds and still in use

Revision 143 2019-05-15 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 63 to 63
 
Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted FTE+
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732 0.20
T2_US_Florida 5923 4146 199 2 28 2571 207 1575 0 0 4146 2571 0.10
Changed:
<
<
T2_US_MIT 8000 4000 640 0 208 3641   359 0 0 4000 3641 1.00
>
>
T2_US_MIT 8000 3800 640 0 208 3641 200 159 0 0 3800 3641 1.00
 
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008 0.50
T2_US_Purdue 8806 3721 194 230 94 3244 413 406 0 41 3721 3244 0.23
T2_US_UCSD 5500 2000 168 348 30 1572 444 432 1000 60 3060 2546 0.00
Line: 87 to 87
 
  • Site Notes:
    • Caltech has a 15% buffer for Hadoop. There is an empty 300 TB Ceph storage system included in the total but not in the free space of the main storage system. They had 0.2 FTE from campus computing in 2018 for the admin transition.
    • Florida uses a RAID system, so the ratio of raw to usable space is 720/504 (see the sketch after these notes). A buffer of approximately 5% free space is needed for good performance. Approximately 0.1 FTE of non-costed labor. The site has also ordered an additional 500 TB usable for 2019, partly with leftover 2018 funds. We did not ask for /store/unmerged, but the /store number is correct. Some numbers were reported to us in TiB rather than TB and are adjusted in the table.
Changed:
<
<
    • MIT: According to PhEDEx, there are 3,520 TB of data hosted, of which 627 TB belongs to the heavy-ion group and should not be counted for the HEP program, leaving 2,893 TB. The /store/user folder is 1,700 TB of usable storage; HI users take 1,060 TB and the rest (640 TB) is HEP. Heavy Ion purchased 2,544 TB of usable storage (5,088 TB raw). The only US Tier-2 center of LHCb will be included in our setup in the next months; the purchase (~$150k) is being prepared and should go out this week. This will add opportunistic resources.
>
>
    • MIT: According to PhEDEx, there are 3,520 TB of data hosted, of which 627 TB belongs to the heavy-ion group and should not be counted for the HEP program, leaving 2,893 TB. The /store/user folder is 1,700 TB of usable storage; HI users take 1,060 TB and the rest (640 TB) is HEP. Heavy Ion purchased 2,544 TB of usable storage (5,088 TB raw). The only US Tier-2 center of LHCb will be included in our setup in the next months; the purchase (~$150k) is being prepared and should go out this week. This will add opportunistic resources. Normally they do not want to fill the storage beyond 95%; it is currently at 81% full.
 
    • Nebraska: [not confirmed] For things like ancillary system administration, network maintenance and operation, assistance with opportunistic resources at HCC, and other primarily personnel contributions not paid for by CMS, 0.5 FTE is a reasonable guess. Operational buffer is 9.5% of the main storage system. Replication factor of 2.19 is slightly higher than 2x since they use treble replication in some spaces like unmerged. "Other" includes 55TB for LIGO, 20TB for Brian, 6TB for dteam, and 4TB for cvmfs, plus about 123TB in unmerged replicated 3x, so
    • Purdue: Has 60TB of raw disk not deployed that is under warranty. Performance buffer is usually 10-15%. We estimate ~0.23 FTE of support from different research computing personnel whose salaries are not covered by the operations program. Most notable is the 0.1 FTE contribution of Laura Theademan, a research computing program manager who provides CMS project management support. Other contributions include HPC engineers (0.05 FTE) and Data Center Management (0.05 FTE) teams supporting the Community Cluster program. We also receive support from our networking department (0.02 FTE) and research computing user support staff (0.01 FTE). Purdue CMS equipment purchases are exempt from facilities and administrative costs.
    • UCSD has 60TB of raw disk for transient user space, listed under "other". There is a 10% free disk buffer for performance. Includes 240 TB of raw disk not fully deployed in the main storage system but purchased with 2018 funds.
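The Florida note above quotes a raw-to-usable RAID ratio of 720/504. A minimal sketch of that conversion, in Python, applied to the raw-disk figure from this revision's table (the function name and rounding are illustrative assumptions, not part of the original page):

<verbatim>
# Convert raw disk to usable capacity using the RAID ratio quoted in the Florida note.

def usable_from_raw(raw_tb, usable_per_raw):
    return raw_tb * usable_per_raw

florida_raw_tb = 5923        # "Raw Disk" column for T2_US_Florida in this revision
raid_ratio = 504 / 720       # usable/raw ratio from the site note

print(round(usable_from_raw(florida_raw_tb, raid_ratio)))   # 4146, matching the "Capacity" column
</verbatim>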

Revision 142 2019-05-14 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 60 to 60
  All numbers are in units of TB = 10^12 bytes. Total capacity for hosting data in the main storage system should add up to the sum total usage in /store plus free space available for writing data. Total capacity includes the capacity for hosting data in the main storage system plus caches and any other space.
Changed:
<
<
Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732
T2_US_Florida 5923 4146 199 2 ??? 2571 207 1575 0 0 4146 2571
T2_US_MIT                 0      
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008
T2_US_Purdue                 0      
T2_US_UCSD 5500 2000 168 348 30 1572 444 432 1000 60 3060 2546
T2_US_Wisconsin 8200 3800 555 13 ??? 2650 260 1150 0 50 3850 2650
Totals 34604 16436 1095 563 193 12129 1797 4015 1330 184 18176 13507
>
>
Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted FTE+
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732 0.20
T2_US_Florida 5923 4146 199 2 28 2571 207 1575 0 0 4146 2571 0.10
T2_US_MIT 8000 4000 640 0 208 3641   359 0 0 4000 3641 1.00
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008 0.50
T2_US_Purdue 8806 3721 194 230 94 3244 413 406 0 41 3721 3244 0.23
T2_US_UCSD 5500 2000 168 348 30 1572 444 432 1000 60 3060 2546 0.00
T2_US_Wisconsin 8200 3800 555 13 83 2650 260 1150 0 50 3850 2650 0.05
Totals 51410 24157 1929 793 606 19014 2210 4780 1330 225 25897 20392 2.08
 
  • "Raw disk" = all un-replicated disk, purchased with Tier-2 funds and still in use
  • "Capacity" = usable space in the main storage element, after subtracting any operationally necessary buffers
  • "User" = usage in /store/user namespace
  • "Group" = usage in /store/group namespace
  • "Unmerged" = usage in /store/unmerged and /store/temp namespaces. Not broken out in some cases but the /store number is correct.
Changed:
<
<
  • "/store" = total PhEDEx space plus /store/user, /store/group, and /store/unmerged, i.e. all of /store namespace.
>
>
  • "/store" = total PhEDEx space plus /store/user, /store/group, and /store/unmerged, i.e. all of /store namespace.
 
  • "Buffer" = amount of free replicated disk space needed for good performance of the storage system
  • "Free" = free space actually available for writing new replicated data in the main storage system
  • "Caches" = total space in xrootd caches (generally not replicated)
  • "Other" = see explanation below, typically user space not under /store.
  • "TOTAL" = Storage capacity in the main storage system plus any caches
  • "Hosted" = total in /store and any caches
Changed:
<
<
I forgot to ask about /store/unmerged ...
>
>
  • FTE+* = count of any un-costed labor which supports the Tier-2 program but not paid for by the operations budget
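The column definitions above imply two approximate identities: Capacity should roughly equal /store plus Free, and Hosted should roughly equal /store plus Caches. A minimal sketch of such a consistency check, in Python, using the T2_US_Florida row from the table above as illustrative input (the dictionary layout and tolerance are assumptions, not part of the original page):

<verbatim>
# A consistency check over one row of the storage table above.
# Row values are copied from the T2_US_Florida line for illustration; small
# mismatches are expected, since sites fold buffers and "Other" space into
# different columns.

row = {
    "capacity": 4146, "store": 2571, "free": 1575,
    "caches": 0, "total": 4146, "hosted": 2571,
}

def check(r, tolerance_tb=150):
    issues = []
    if abs(r["capacity"] - (r["store"] + r["free"])) > tolerance_tb:
        issues.append("Capacity != /store + Free")
    if abs(r["hosted"] - (r["store"] + r["caches"])) > tolerance_tb:
        issues.append("Hosted != /store + Caches")
    return issues

print(check(row) or "row is consistent")
</verbatim>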
 
  • Site Notes:
    • Caltech has a 15% buffer for hadoop. There is an empty 300TB ceph storage system included in the total but not in free space in the main storage system. They had 0.2 FTE from campus computing in 2018 for the admin transition
    • Florida uses a RAID system so that the ratio of Raw to usable space is 720/504. A buffer of approximately 5% free space is needed for good performance. Approximately 0.1 FTE non-costed labor. Site has also ordered an additional 500TB usable for 2019, partly with leftover 2018 funds. Did not ask for /store/unmerged but the /store number is correct. Some numbers were reported to us in TiB not TB and are adjusted in the table.
Changed:
<
<
    • MIT will answer on Monday May 13
>
>
    • MIT: According to PhEDEx, there are 3,520 TB of data hosted, of which 627 TB belongs to the heavy-ion group and should not be counted for the HEP program, leaving 2,893 TB. The /store/user folder is 1,700 TB of usable storage; HI users take 1,060 TB and the rest (640 TB) is HEP. Heavy Ion purchased 2,544 TB of usable storage (5,088 TB raw). The only US Tier-2 center of LHCb will be included in our setup in the next months; the purchase (~$150k) is being prepared and should go out this week. This will add opportunistic resources.
 
    • Nebraska: [not confirmed] For things like ancillary system administration, network maintenance and operation, assistance with opportunistic resources at HCC, and other primarily personnel contributions not paid for by CMS, 0.5 FTE is a reasonable guess. Operational buffer is 9.5% of the main storage system. Replication factor of 2.19 is slightly higher than 2x since they use treble replication in some spaces like unmerged. "Other" includes 55TB for LIGO, 20TB for Brian, 6TB for dteam, and 4TB for cvmfs, plus about 123TB in unmerged replicated 3x, so
Added:
>
>
    • Purdue: Has 60TB of raw disk not deployed that is under warranty. Performance buffer is usually 10-15%. We estimate ~0.23 FTE of support from different research computing personnel whose salaries are not covered by the operations program. Most notable is the 0.1 FTE contribution of Laura Theademan, a research computing program manager who provides CMS project management support. Other contributions include HPC engineers (0.05 FTE) and Data Center Management (0.05 FTE) teams supporting the Community Cluster program. We also receive support from our networking department (0.02 FTE) and research computing user support staff (0.01 FTE). Purdue CMS equipment purchases are exempt from facilities and administrative costs.
 
    • UCSD has 60TB of raw disk for transient user space, listed under "other". There is a 10% free disk buffer for performance. Includes 240 TB of raw disk not fully deployed in the main storage system but purchased with 2018 funds.
    • Wisconsin has 50 TB of replicated disk for transient user space, listed under "other". The un-costed labour that supports the site amounts to 0.05 - 0.075 FTE.
Deleted:
<
<
Total Un-costed labor: 0.2+0.1+0.5+0.05 = 0.85 FTE (five sites reporting)
 

Opportunistic Computing Resources

Information from Sites

Revision 141 2019-05-11 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 39 to 39
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 100408 9096 4548 3838 100 06/27/18
>
>
T2_US_Caltech 100408 9096 4548 4120 100 06/27/18
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
Line: 51 to 51
 Notes:
  • Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
  • Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
Changed:
<
<
  • Caltech has 6,707 TB raw disk space for hosting (3,353 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance. (The Ceph instance's replication is changing; for accounting purposes a factor of 2 is assumed. We test different erasure codings, which always results in a different amount of available space.)
>
>
  • Caltech: Total available storage is 4,120 TB. There are 7,281 TB of raw disk space for HDFS (3,640 TB usable at replication factor 2). In addition to HDFS, Caltech maintains a 330 TB xrootd cache (no replication) and a 300 TB Ceph development instance. (The Ceph instance's replication is changing; for accounting purposes a factor of 2 is assumed. We test different erasure codings, which always results in a different amount of available space. This space is not used for operations.) Overall usable: 3,640 + 330 + 150 = 4,120 TB. Overall raw: 7,281 + 330 + 300 = 7,911 TB. (A short sketch of this accounting follows these notes.)
 
  • UCSD has a flexible replication scheme, plus a 288 TB xrootd cache which is in production.
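A minimal sketch of the Caltech accounting described in the note above, in Python, assuming replication factor 2 for both HDFS and the Ceph instance and no replication for the xrootd cache (the variable names are illustrative):

<verbatim>
# Sketch of the Caltech storage accounting from the note above.
# Assumes replication factor 2 for HDFS and Ceph, no replication for the xrootd cache.

hdfs_raw, cache_raw, ceph_raw = 7281, 330, 300       # TB, as quoted in the note

hdfs_usable = hdfs_raw / 2                            # 3640.5 -> quoted as 3640
cache_usable = cache_raw                              # cache is not replicated
ceph_usable = ceph_raw / 2                            # 150

total_raw = hdfs_raw + cache_raw + ceph_raw           # 7911 TB raw
total_usable = hdfs_usable + cache_usable + ceph_usable

print(total_raw, round(total_usable))                 # 7911 4120
</verbatim>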

<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->

Revision 140 2019-05-11 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 56 to 56
 
<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
Added:
>
>

Storage breakdown study - May 2019

All numbers are in units of TB = 10^12 bytes. Total capacity for hosting data in the main storage system should add up to the sum total usage in /store plus free space available for writing data. Total capacity includes the capacity for hosting data in the main storage system plus caches and any other space.

Site Raw Disk Capacity User Group Unmerged /store Buffer Free Caches Other TOTAL Hosted
T2_US_Caltech 7911 3090 27 92 40 2402 550 588 330 0 3720 2732
T2_US_Florida 5923 4146 199 2 ??? 2571 207 1575 0 0 4146 2571
T2_US_MIT                 0      
T2_US_Nebraska 7070 3400 146 108 123 2934 336 270 0 74 3400 3008
T2_US_Purdue                 0      
T2_US_UCSD 5500 2000 168 348 30 1572 444 432 1000 60 3060 2546
T2_US_Wisconsin 8200 3800 555 13 ??? 2650 260 1150 0 50 3850 2650
Totals 34604 16436 1095 563 193 12129 1797 4015 1330 184 18176 13507

  • "Raw disk" = all un-replicated disk, purchased with Tier-2 funds and still in use
  • "Capacity" = usable space in the main storage element, after subtracting any operationally necessary buffers
  • "User" = usage in /store/user namespace
  • "Group" = usage in /store/group namespace
  • "Unmerged" = usage in /store/unmerged and /store/temp namespaces. Not broken out in some cases but the /store number is correct.
  • "/store" = total PhEDEx space plus /store/user, /store/group, and /store/unmerged, i.e. all of /store namespace.
  • "Buffer" = amount of free replicated disk space needed for good performance of the storage system
  • "Free" = free space actually available for writing new replicated data in the main storage system
  • "Caches" = total space in xrootd caches (generally not replicated)
  • "Other" = see explanation below, typically user space not under /store.
  • "TOTAL" = Storage capacity in the main storage system plus any caches
  • "Hosted" = total in /store and any caches

I forgot to ask about /store/unmerged ...

  • Site Notes:
    • Caltech has a 15% buffer for hadoop. There is an empty 300TB ceph storage system included in the total but not in free space in the main storage system. They had 0.2 FTE from campus computing in 2018 for the admin transition
    • Florida uses a RAID system so that the ratio of Raw to usable space is 720/504. A buffer of approximately 5% free space is needed for good performance. Approximately 0.1 FTE non-costed labor. Site has also ordered an additional 500TB usable for 2019, partly with leftover 2018 funds. Did not ask for /store/unmerged but the /store number is correct. Some numbers were reported to us in TiB not TB and are adjusted in the table.
    • MIT will answer on Monday May 13
    • Nebraska: [not confirmed] For things like ancillary system administration, network maintenance and operation, assistance with opportunistic resources at HCC, and other primarily personnel contributions not paid for by CMS, 0.5 FTE is a reasonable guess. Operational buffer is 9.5% of the main storage system. Replication factor of 2.19 is slightly higher than 2x since they use treble replication in some spaces like unmerged. "Other" includes 55TB for LIGO, 20TB for Brian, 6TB for dteam, and 4TB for cvmfs, plus about 123TB in unmerged replicated 3x, so
    • UCSD has 60TB of raw disk for transient user space, listed under "other". There is a 10% free disk buffer for performance. Includes 240 TB of raw disk not fully deployed in the main storage system but purchased with 2018 funds.
    • Wisconsin has 50 TB of replicated disk for transient user space, listed under "other". The un-costed labour that supports the site amounts to 0.05 - 0.075 FTE.

Total Un-costed labor: 0.2+0.1+0.5+0.05 = 0.85 FTE (five sites reporting)

 

Opportunistic Computing Resources

Information from Sites

Revision 139 2019-03-21 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 45 to 45
 
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
Changed:
<
<
T2_US_Wisconsin 128151 12888 6444 4075 100 02/04/19
>
>
T2_US_Wisconsin 128151 12888 6444 4100 100 02/04/19
 
Total HEP Sites            
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
Notes:

Revision 138 2019-03-11 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 39 to 39
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 88000 8316 4158 3838 100 06/27/18
>
>
T2_US_Caltech 100408 9096 4548 3838 100 06/27/18
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
Line: 51 to 51
 Notes:
  • Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
  • Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
Changed:
<
<
  • Caltech has 6,417 TB raw disk space for hosting (3,208 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
>
>
  • Caltech has 6,707 TB raw disk space for hosting (3,353 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance. (The Ceph instance's replication is changing; for accounting purposes a factor of 2 is assumed. We test different erasure codings, which always results in a different amount of available space.)
 
  • UCSD has a flexible replication scheme, plus a 288 TB xrootd cache which is in production.

<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->

Revision 137 2019-02-04 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 45 to 45
 
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
Changed:
<
<
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
>
>
T2_US_Wisconsin 128151 12888 6444 4075 100 02/04/19
 
Total HEP Sites            
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
Notes:

Revision 136 2018-10-30 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 
T2_US_Caltech 88000 8316 4158 3838 100 06/27/18
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
Changed:
<
<
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
>
>
T2_US_Nebraska 112395 10304 5152 3600 100 10/30/18
 
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18

Revision 135 2018-10-07 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 39 to 39
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 88000 8316 4158 3521 100 06/27/18
>
>
T2_US_Caltech 88000 8316 4158 3838 100 06/27/18
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
Line: 51 to 51
 Notes:
  • Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
  • Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
Changed:
<
<
  • Caltech has 5,782 TB raw disk space for hosting (2,891 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
>
>
  • Caltech has 6,417 TB raw disk space for hosting (3,208 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
 
  • UCSD has a flexible replication scheme, plus a 288 TB xrootd cache which is in production.

<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
Line: 118 to 118
 
Site Tools
T2_BR_SPRACE  
T2_BR_UERJ Kickstart and Ansible (Red Hat's resources)
Changed:
<
<
T2_US_Caltech Foreman 1.2 and Puppet 3.8
>
>
T2_US_Caltech Foreman 1.16 and Puppet 5
 
T2_US_Florida Florida HiperGator SIS system (Image)
T2_US_MIT  
T2_US_Nebraska Cobbler 2.6 and Puppet 4.8 (opensource)

Revision 134 2018-10-05 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
Changed:
<
<
T2_US_Purdue 117088 7784 5592 3900 100 10/05/18
>
>
T2_US_Purdue 117088 7724 5592 3900 100 10/05/18
 
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
Total HEP Sites 245015 23312 5456 6928    

Revision 133 2018-10-05 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
Changed:
<
<
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18
>
>
T2_US_Purdue 117088 7784 5592 3900 100 10/05/18
 
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
Total HEP Sites 245015 23312 5456 6928    

Revision 132 2018-06-27 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 39 to 39
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 88000 8316 4158 2739 100 01/30/18
>
>
T2_US_Caltech 88000 8316 4158 3521 100 06/27/18
 
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
Line: 52 to 52
 Notes:
  • Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
  • Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
Changed:
<
<
  • Caltech has 4,350 TB raw disk space for hosting (2,175 TB replicated), plus a 264 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
>
>
  • Caltech has 5,782 TB raw disk space for hosting (2,891 TB replicated), plus a 330 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
 
  • UCSD has a flexible replication scheme, plus a 288 TB xrootd cache which is in production.

Revision 131 2018-03-15 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 
T2_US_Caltech 88000 8316 4158 2739 100 01/30/18
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
Changed:
<
<
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
>
>
T2_US_Nebraska 100245 9440 4720 3300 100 12/11/17
 
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18

Revision 129 2018-03-09 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 40 to 40
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 88000 8316 4158 2739 100 01/30/18
Changed:
<
<
T2_US_Florida 137086 7998 7998 3862 100 02/05/18
>
>
T2_US_Florida 152512 8898 8898 3862 100 03/09/18
 
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18

Revision 128 2018-02-07 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 39 to 39
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 88000 8316 4158 4350 100 01/30/18
>
>
T2_US_Caltech 88000 8316 4158 2739 100 01/30/18
 
T2_US_Florida 137086 7998 7998 3862 100 02/05/18
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18
Changed:
<
<
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17
>
>
T2_US_UCSD 122247 10912 5456 3328 80 11/07/17
 
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
Total HEP Sites 122768 12400   3600    
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
Deleted:
<
<
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
 
Changed:
<
<
(*) Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
>
>
Notes:
  • Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
  • Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.
  • Caltech has 4,350 TB raw disk space for hosting (2,175 TB replicated), plus a 264 TB xrootd cache which is in production, and a 300 TB Ceph development instance.
  • UCSD has a flexible replication scheme, plus a 288 TB xrootd cache which is in production.
 
<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->

Revision 127 2018-02-06 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 40 to 40
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 88000 8316 4158 4350 100 01/30/18
Changed:
<
<
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
>
>
T2_US_Florida 137086 7998 7998 3862 100 02/05/18
 
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18

Revision 126 2018-02-01 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 49 to 48
 
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
Total HEP Sites 122768 12400   3600    
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
Changed:
<
<
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
>
>
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.

(*) Purdue compute nodes adhere to a strict 5-year hardware retirement policy. Some Purdue storage nodes run outside their warranty period; 170 TB of the above 3,900 TB is out of warranty.

<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
 

Opportunistic Computing Resources

Line: 62 to 65
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Changed:
<
<
T2_US_Purdue 21,164**   05/11/2016
>
>
T2_US_Purdue 29,660**   02/1/2018
 
T2_US_UCSD 6000***   12/03/2015
T2_US_Vanderbilt 800   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016
Line: 70 to 73
 (*) Disabled until 2017 HEP cluster upgrade.
(* * *) Comet at SDSC with allocation
Changed:
<
<
** Available via PBS standy queue with 4 hour walltime
>
>
** Available via PBS standby queue with 4 hour walltime
 

Discovery of Opportunistic Batch Slots with the Global Pool

Revision 125 2018-01-30 - HarveyNewman

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 40 to 40
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 78220 7600 3800 3000 100 12/06/17
>
>
T2_US_Caltech 88000 8316 4158 4350 100 01/30/18
 
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17

Revision 124 2018-01-26 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
  We don't know what the 2018 hardware budget will be yet, but we in the University Facilities area have been thinking about priorities for this year's hardware deployment, and we would like to get feedback from each of the sites.
Changed:
<
<
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year. Given that most hardware will not be deployed until late in the calendar year, close to the start of the two-year shutdown, we do not see a need to massively increase resources at the sites at this time. We are also running more than 50% of the total CMS production activity in the U.S., which is problematic for the operations program when talking to funding agencies. Instead, we can focus on reinforcing site infrastructure to better support current operation and prepare for future needs.
>
>
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year. Given that most hardware will not be deployed until late in the calendar year, close to the start of the two-year shutdown, we do not see a need to massively increase resources at the sites at this time. We are also running more than 50% of the total CMS production activity in the U.S., which is problematic for the operations program when talking to funding agencies. Instead, we can focus on reinforcing site infrastructure to better support current operations and prepare for future needs.
 Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06), we should prioritize 2018 purchases as follows:

Revision 123 2018-01-25 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 16 to 16
 
  • Take the opportunity to make any infrastructure upgrades rather than increase capacity, i.e. networking, cooling, etc.
Changed:
<
<
  • Spend any remaining funds on storage.
>
>
  • Spend any remaining funds on storage or processing, as you see fit or according to any pricing advantages you are facing. We recommend storage purchases over processing, which is already very well-provisioned.
  Please send us your comments or criticisms about this guidance. Sites which lease rather than own equipment may have quite different opinions or constraints. We'd like to hear them too.

Revision 122 2018-01-22 - OliverGutsche

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
  We don't know what the 2018 hardware budget will be yet, but we in the University Facilities area have been thinking about priorities for this year's hardware deployment, and we would like to get feedback from each of the sites.
Changed:
<
<
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year. Given that most hardware will not be deployed until late in the calendar year, close to the start of the two-year shutdown, we do not see a need to massively provision sites at this time. Instead, we can focus on reinforcing site infrastructure to better support current operation and prepare for future needs.
>
>
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year. Given that most hardware will not be deployed until late in the calendar year, close to the start of the two-year shutdown, we do not see a need to massively increase resources at the sites at this time. We are also running more than 50% of the total CMS production activity in the U.S., which is problematic for the operations program when talking to funding agencies. Instead, we can focus on reinforcing site infrastructure to better support current operation and prepare for future needs.
 Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06), we should prioritize 2018 purchases as follows:

Revision 121 2018-01-22 - KenBloom

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
  We don't know what the 2018 hardware budget will be yet, but we in the University Facilities area have been thinking about priorities for this year's hardware deployment, and we would like to get feedback from each of the sites.
Changed:
<
<
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year.
>
>
The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year. Given that most hardware will not be deployed until late in the calendar year, close to the start of the two-year shutdown, we do not see a need to massively provision sites at this time. Instead, we can focus on reinforcing site infrastructure to better support current operation and prepare for future needs.
 Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06), we should prioritize 2018 purchases as follows:

  • Maintain (but not increase) current processing capacity, with respect to any planned worker node retirements.
Changed:
<
<
  • Deploy the storage pledge plus a (recommended) buffer of +1 PB to enable U.S. physics analysis. The April 1, 2018 storage pledge to CMS is 2,500 TB and this increases to 2,800 TB for April 1, 2019, and so should be deployed by the end of 2018. Therefore, the storage deployment goal for April 1, 2018 is 3,500 TB and for the end of calendar 2018 is 3,800 TB.
>
>
  • Deploy the storage pledge plus a buffer of +1 PB to enable U.S. physics analysis. The April 1, 2018 storage pledge to CMS is 2,500 TB and this increases to 2,800 TB for April 1, 2019, and so should be deployed by the end of 2018. Therefore, the storage deployment goal for April 1, 2018 is 3,500 TB and for the end of calendar 2018 is 3,800 TB.
 
  • Take the opportunity to make any infrastructure upgrades rather than increase capacity, i.e. networking, cooling, etc.

Revision 120 2018-01-22 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
  The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year.
Changed:
<
<
Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06) [1,2], we should prioritize 2018 purchases as follows:
>
>
Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06), we should prioritize 2018 purchases as follows:
 
Changed:
<
<
  • Maintain current processing capacity, with respect to any planned worker node retirements.
>
>
  • Maintain (but not increase) current processing capacity, with respect to any planned worker node retirements.

  • Deploy the storage pledge plus a (recommended) buffer of +1 PB to enable U.S. physics analysis. The April 1, 2018 storage pledge to CMS is 2,500 TB and this increases to 2,800 TB for April 1, 2019, and so should be deployed by the end of 2018. Therefore, the storage deployment goal for April 1, 2018 is 3,500 TB and for the end of calendar 2018 is 3,800 TB.
 
  • Take the opportunity to make any infrastructure upgrades rather than increase capacity, i.e. networking, cooling, etc.

  • Spend any remaining funds on storage.
Deleted:
<
<
The 2019 storage pledge per site is ~2,800 TB, and our continuing recommendation is to deploy at least 1 PB beyond the pledge in order to enable physics analysis here in the U.S., so a minimum of 3,800 TB by the end of this year.
 Please send us your comments or criticisms about this guidance. Sites which lease rather than own equipment may have quite different opinions or constraints. We'd like to hear them too.

2018 CMS Tier-2 resource request, per site:

Line: 33 to 33
 
  • 110,500 HS06
  • 3,300 TB
Changed:
<
<
or a ratio of 33.5 HS06/TB, somewhat more in favor of processing than the previous year.

Assuming 10% of storage and 5% of processing is retired each year, the steady-state replacement cost using the above purchasing power amounts is 330 TB * $55/TB + 5,525 HS06 * $10/HS06 = $73,400.

>
>
or a ratio of 33.5 HS06/TB, somewhat more in favor of processing than the previous year. Assuming 10% of storage and 5% of processing is retired each year, the steady-state replacement cost using the above purchasing power amounts is 330 TB * $55/TB + 5,525 HS06 * $10/HS06 = $73,400.
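A minimal sketch of the steady-state replacement-cost arithmetic above, in Python, using the nominal January 2018 site and the quoted retirement rates and unit prices (the variable names are illustrative):

<verbatim>
# Steady-state replacement cost for the nominal January 2018 site,
# using the retirement rates and unit prices quoted above.

site_hs06, site_tb = 110_500, 3_300          # nominal site
cpu_retire, disk_retire = 0.05, 0.10         # fraction retired per year
usd_per_hs06, usd_per_tb = 10, 55            # purchasing-power estimates

annual_cost = (site_tb * disk_retire * usd_per_tb
               + site_hs06 * cpu_retire * usd_per_hs06)
print(f"${annual_cost:,.0f}")                # $73,400
</verbatim>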
 

U.S. CMS Tier-2 Facilities Deployment Status

Revision 119 2018-01-20 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Changed:
<
<

Purchasing Power Estimates

>
>

DRAFT Deployment Guidance for 2018

 
Changed:
<
<
Revised Estimates for 2017:
>
>
We don't know what the 2018 hardware budget will be yet, but we in the University Facilities area have been thinking about priorities for this year's hardware deployment, and we would like to get feedback from each of the sites.

The storage and processing pledges don't go up much in 2019, which is a shutdown year for the LHC, relative to 2018. We use the current calendar year's hardware budget to provision for the following year's pledge, which activates on April 1st each year.

Our consensus is that, since we have enough capacity to cover the 2019 estimated processing pledge three times over (250,000 HS06 in total for the seven U.S. Tier-2 sites, while the current total deployment is 773,650 HS06) [1,2], we should prioritize 2018 purchases as follows:

  • Maintain current processing capacity, with respect to any planned worker node retirements.

  • Take the opportunity to make any infrastructure upgrades rather than increase capacity, i.e. networking, cooling, etc.

  • Spend any remaining funds on storage.

The 2019 storage pledge per site is ~2,800 TB, and our continuing recommendation is to deploy at least 1 PB beyond the pledge in order to enable physics analysis here in the U.S., so a minimum of 3,800 TB by the end of this year.

Please send us your comments or criticisms about this guidance. Sites which lease rather than own equipment may have quite different opinions or constraints. We'd like to hear them too.

2018 CMS Tier-2 resource request, per site:

  2018 2019
CPU HS06 32,143 35,714
Storage TB 2,500 2,786

Purchasing Power Estimates

Estimates for 2018:

 
  • $10/HS06 processing
  • $55/TB storage
Changed:
<
<
Nominal site in September 2017:
  • 98,000 HS06
  • 3,400 TB

or a ratio of 29 HS06/TB, roughly the same as last year.

>
>
Nominal site in January 2018:
  • 110,500 HS06
  • 3,300 TB
 
Changed:
<
<
The minimum storage deployment recommendation for April 2018 is 3,500 TB (1,000TB over the pledge of 2,500TB). The pledge to global CMS for processing is 32,150 HS06 for April 2018, an amount somewhat unrelated to the actual need here in the U.S.
>
>
or a ratio of 33.5 HS06/TB, somewhat more in favor of processing than the previous year.
 
Changed:
<
<
Assuming 10% of storage and 5% of processing is retired each year, the steady-state replacement cost using the above purchasing power amounts is 340TB * $55/TB + 4,900 HS06 * $10/HS06 = $67,700.
>
>
Assuming 10% of storage and 5% of processing is retired each year, the steady-state replacement cost using the above purchasing power amounts is 330 TB * $55/TB + 5,525 HS06 * $10/HS06 = $73,400.
 

U.S. CMS Tier-2 Facilities Deployment Status

Revision 118 2018-01-08 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 29 to 29
 
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17
Changed:
<
<
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
>
>
T2_US_Wisconsin 122768 12400 To be updated 3600 100 01/08/18
 
Total HEP Sites            
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->

Revision 117 2018-01-08 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 28 to 27
 
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
Changed:
<
<
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
>
>
T2_US_Purdue 104938 6860 5160 3900 100 01/08/18
 
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 235777 22412 10196 7900    

Revision 116 2017-12-12 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 26 to 26
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 78220 7600 3800 3000 100 12/06/17
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
Changed:
<
<
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
>
>
T2_US_MIT 107576 9176 9176 4000 100 12/12/17
 
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17

Revision 115 2017-12-11 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 27 to 27
 
T2_US_Caltech 78220 7600 3800 3000 100 12/06/17
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
Changed:
<
<
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
>
>
T2_US_Nebraska 100815 8912 4456 3300 100 12/11/17
 
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17

Revision 114 2017-12-08 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 24 to 24
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 77176 7504 3752 3000 100 12/06/17
>
>
T2_US_Caltech 78220 7600 3800 3000 100 12/06/17
 
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17

Revision 113 2017-12-07 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 24 to 24
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 71817 7024 3512 3000 100 08/24/17
>
>
T2_US_Caltech 77176 7504 3752 3000 100 12/06/17
 
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17

Revision 112 2017-11-08 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 47 to 47
 
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
T2_US_Purdue 21,164**   05/11/2016
Changed:
<
<
T2_US_UCSD 1680***   12/03/2015
>
>
T2_US_UCSD 6000***   12/03/2015
 
T2_US_Vanderbilt 800   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016
Total >23,900    

Revision 111 2017-11-07 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 29 to 29
 
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
Changed:
<
<
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
>
>
T2_US_UCSD 122247 10912 5456 4900 80 11/07/17
 
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 113530 11500 4740 3000    
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17

Revision 110 2017-09-29 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"

Purchasing Power Estimates

Changed:
<
<
A recent purchase at Wisconsin was approximately 37 kHS06 with 1,280 TB of storage; with storage estimated at $50/TB ($64,000), the net price for the racked processing was $262,000, or about $7/HS06.
>
>
Revised Estimates for 2017:
  • $10/HS06 processing
  • $55/TB storage
 
Changed:
<
<
Estimates for 2017:
  • $6/HS06 processing
  • $40/TB storage
>
>
Nominal site in September 2017:
  • 98,000 HS06
  • 3,400 TB
 
Changed:
<
<
Nominal site in November 2016:
  • 70,000 HS06
  • 2,400 TB
>
>
or a ratio of 29 HS06/TB, roughly the same as last year.
 
Changed:
<
<
or a ratio of 29 HS06/TB.
>
>
The minimum storage deployment recommendation for April 2018 is 3,500 TB (1,000TB over the pledge of 2,500TB). The pledge to global CMS for processing is 32,150 HS06 for April 2018, an amount somewhat unrelated to the actual need here in the U.S.
 
Changed:
<
<
Our minimum deployment goals (CRSG recommendation) for 2017 are:
  • 30,357 HS06
  • 2,487 TB
>
>
Assuming 10% of storage and 5% of processing is retired each year, the steady-state replacement cost using the above purchasing power amounts is 340TB * $55/TB + 4,900 HS06 * $10/HS06 = $67,700.
 
Deleted:
<
<
Assuming 10% of storage and 5% of processing is retired each year, the $200,000 proposed hardware budget for next year results in sites that would look like the following, if a site spends $x on processing:
CPU = (70,000 HS06 * 0.95) + ( x / $6 )
Storage = ( 2,400 TB * 0.9 ) + ( ($200,000 - x) / $40 ) =?= 3,500 TB

If the deployment goal for April 2017 is the CRSG recommendation plus one PB, or 3,500 TB, then each nominal site will have to spend about $54,000 on storage next year, leaving about $146,000 to spend either on processing or storage. Spending it all on processing would result in a nominal site having about 91kHS06, or ~26 HS06/TB, somewhat (~10%) more skewed in favor of storage than a nominal site today.
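A minimal sketch reproducing the budget-split arithmetic in the two paragraphs above, in Python, with the quoted prices and retirement assumptions (the variable names are illustrative):

<verbatim>
# The 2017 budget-split arithmetic from the paragraphs above.
# Prices and retirement assumptions are those quoted in the surrounding text.

budget = 200_000                              # proposed hardware budget, USD
cpu_now, disk_now = 70_000, 2_400             # nominal site: HS06, TB
usd_per_hs06, usd_per_tb = 6, 40
disk_goal = 3_500                             # CRSG recommendation + 1 PB

disk_to_buy = disk_goal - disk_now * 0.9      # after 10% storage retirement
storage_spend = disk_to_buy * usd_per_tb      # ~$54k
remaining = budget - storage_spend

cpu_if_all_processing = cpu_now * 0.95 + remaining / usd_per_hs06
print(round(storage_spend), round(cpu_if_all_processing),
      round(cpu_if_all_processing / disk_goal))   # ~53600, ~90900, ~26 HS06/TB
</verbatim>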

Notes:

2016: 25.0 kHS06, 1.36 PB
2017: 26.8 kHS06, 1.90 PB (28.6 kHS06, 2.32 PB)
2018: 32.1 kHS06, 2.25 PB (33.9 kHS06, 2.79 PB)

2016: $275k
2017: $200k ($150k)
 

U.S. CMS Tier-2 Facilities Deployment Status

Revision 109 2017-09-16 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 71817 7024 3512 3000 100 08/24/17
Changed:
<
<
T2_US_Florida 65815 <!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 --> 6144 6144 2277 100 03/08/17
>
>
T2_US_Florida 137086 7998 7998 2277 100 09/15/17
 
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
Line: 51 to 51
 
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 113530 11500 4740 3000    
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
Changed:
<
<
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely we will get the extension.
>
>
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely that we will get the extension.
<!-- 5.639 (AMD) : 10.71 (AMD HS06) = 13.97 (HPG2) : x to x =26.52 but per https://docs.google.com/spreadsheets/d/1VM-guNSYpYeJ0ghyO5K3otbyQIev_tLR8epVu4MHDvA/edit#gid=0 -->
<!-- 6144 * 10.71 era number = 65815 -->
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
 

Opportunistic Computing Resources

Revision 1082017-08-25 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 65730 8304 4152 2355 100 07/26/17
>
>
T2_US_Caltech 71817 7024 3512 3000 100 08/24/17
 
T2_US_Florida 65815
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
6144 6144 2277 100 03/08/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17

Revision 1072017-08-09 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 46 to 46
 
T2_US_Florida 65815
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
6144 6144 2277 100 03/08/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
Changed:
<
<
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17
>
>
T2_US_Purdue 88063 5660 4560 3500 100 08/09/17
 
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 214081 20620 9300 7900    
Line: 64 to 64
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Changed:
<
<
T2_US_Purdue 28,092**   05/11/2016
>
>
T2_US_Purdue 21,164**   05/11/2016
 
T2_US_UCSD 1680***   12/03/2015
T2_US_Vanderbilt 800   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016

Revision 1062017-07-27 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
>
>
T2_US_Caltech 65730 8304 4152 2355 100 07/26/17
 
T2_US_Florida 65815
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
6144 6144 2277 100 03/08/17
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17

Revision 1052017-03-21 - AndrewMelo

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 50 to 50
 
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 214081 20620 9300 7900    
Changed:
<
<
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
>
>
T2_US_Vanderbilt unk 4396 2198 3200 100 03/21/17
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely that we will get the extension.

Opportunistic Computing Resources

Line: 66 to 66
 
T2_US_Nebraska ~4,000   05/11/2016
T2_US_Purdue 28,092**   05/11/2016
T2_US_UCSD 1680***   12/03/2015
Changed:
<
<
T2_US_Vanderbilt 400   05/25/2016
>
>
T2_US_Vanderbilt 800   05/25/2016
 
T2_US_Wisconsin ~1,500   07/14/2016
Total >23,900    
(*) Disabled until 2017 HEP cluster upgrade.
Line: 104 to 104
 
T2_US_Nebraska 9557 -5840 3717 3000
T2_US_Purdue 16217 -6436 9781 9200
T2_US_UCSD 5329 -5256 73 0
Changed:
<
<
T2_US_Vanderbilt 2598 -2198 400 0
>
>
T2_US_Vanderbilt 5274 -4396 878 0
 
T2_US_Wisconsin 10573 -7860 2713 1500
Total 10573 -7860 2713 1500
Line: 121 to 121
 
T2_US_Nebraska Cobbler 2.6 and Puppet 4.8 (opensource)
T2_US_Purdue Foreman 1.2 and Puppet 2.7
T2_US_UCSD Foreman (Latest) and Puppet 3.7.5 (4.x)
Changed:
<
<
T2_US_Vanderbilt CFEngine 3.6, Puppet 3.2
>
>
T2_US_Vanderbilt CFEngine 3.5 (cluster + storage) Saltstack 2016.11.3 (storage)
 
T2_US_Wisconsin Puppet 3.7.3, ganeti (VM cluster)
Responsible: JamesLetts

Revision 1042017-03-08 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
Changed:
<
<
T2_US_Florida 81826
<!-- 4774 * 17.14 (See spreadsheet) instead of 4774 * 10.94 -->
4774 4774 2277 100 03/07/17
>
>
T2_US_Florida 65815
<!-- Opteron 6378 : 6144 * 96 * 64 * 10.71 -->
6144 6144 2277 100 03/08/17
 
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17

Revision 1032017-03-07 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
Changed:
<
<
T2_US_Florida 81158
<!-- 4774 * 17 instead of 4774 * 10.94 -->
4774 4774 2277 100 11/15/16
>
>
T2_US_Florida 81826
<!-- 4774 * 17.14 (See spreadsheet) instead of 4774 * 10.94 -->
4774 4774 2277 100 03/07/17
 
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17

Revision 1022017-03-06 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
Changed:
<
<
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
>
>
T2_US_Florida 81158
<!-- 4774 * 17 instead of 4774 * 10.94 -->
4774 4774 2277 100 11/15/16
 
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17

Revision 1012017-03-03 - GarhanAttebury

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 45 to 45
 
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
Changed:
<
<
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
>
>
T2_US_Nebraska 83265 7664 4288 3300 100 03/03/17
 
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17

Revision 1002017-01-23 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 44 to 44
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
Changed:
<
<
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
>
>
T2_US_MIT 93000 8200 8200 4000 100 01/23/17
 
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17

Revision 992017-01-17 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 46 to 46
 
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
Changed:
<
<
T2_US_Purdue 132283 8040 7320 3500 100 01/09/17
>
>
T2_US_Purdue 141384 8800 7700 3500 100 01/17/17
 
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 214081 20620 9300 7900    

Revision 982017-01-12 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 83035 7664 4756 2820 100 01/12/17
>
>
T2_US_Caltech 72686 7748 4760 2820 100 01/12/17
 
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16

Revision 972017-01-12 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 42 to 42
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
>
>
T2_US_Caltech 83035 7664 4756 2820 100 01/12/17
 
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
Line: 60 to 60
 Many Tier-2 sites allow (often) seamless usage for CMS of opportunistic computing resources through the existing Tier-2 infrastructure. List here please the opportunistic computing resources (CPU and/or storage) available at each site over and above the values in the table above.

Site Batch Slots Space for hosting (TB) Last update
Changed:
<
<
T2_US_Caltech 200*   02/19/2015
>
>
T2_US_Caltech 0*   01/12/2017
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Line: 69 to 69
 
T2_US_Vanderbilt 400   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016
Total >23,900    
Changed:
<
<
(*) Increasing to 700 later in 2015.
>
>
(*) Disabled until 2017 HEP cluster upgrade.
 (* * *) Comet at SDSC with allocation

** Available via PBS standby queue with 4-hour walltime

Revision 962017-01-11 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 47 to 47
 
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
T2_US_Purdue 132283 8040 7320 3500 100 01/09/17
Changed:
<
<
T2_US_UCSD 97160 7360 3680 4900 80 01/08/17
>
>
T2_US_UCSD 100551 9120 4560 4900 80 01/10/17
 
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
Total HEP Sites 113530 11500 4740 3000    
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16

Revision 952017-01-09 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 48 to 48
 
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
T2_US_Purdue 132283 8040 7320 3500 100 01/09/17
T2_US_UCSD 97160 7360 3680 4900 80 01/08/17
Changed:
<
<
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
>
>
T2_US_Wisconsin 113530 11500 4740 3000 100 01/09/17
 
Total HEP Sites            
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely that we will get the extension.

Revision 942017-01-09 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 46 to 46
 
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
Changed:
<
<
T2_US_Purdue 111098 8744 8024 3500 100 12/22/16
>
>
T2_US_Purdue 132283 8040 7320 3500 100 01/09/17
 
T2_US_UCSD 97160 7360 3680 4900 80 01/08/17
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Total HEP Sites 182672 16160 8420 7340    

Revision 932017-01-09 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 47 to 47
 
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
T2_US_Purdue 111098 8744 8024 3500 100 12/22/16
Changed:
<
<
T2_US_UCSD 97,160 7360 3680 4900 80 1/8/17
>
>
T2_US_UCSD 97160 7360 3680 4900 80 01/08/17
 
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Total HEP Sites 85512 8800 4740 2440    
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16

Revision 922017-01-09 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 47 to 47
 
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
T2_US_Purdue 111098 8744 8024 3500 100 12/22/16
Changed:
<
<
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
>
>
T2_US_UCSD 97,160 7360 3680 4900 80 1/8/17
 
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Total HEP Sites 85512 8800 4740 2440    
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
Line: 66 to 66
 
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
T2_US_Purdue 28,092**   05/11/2016
Changed:
<
<
T2_US_UCSD 760***   12/03/2015
>
>
T2_US_UCSD 1680***   12/03/2015
 
T2_US_Vanderbilt 400   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016
Total >23,900    
(*) Increasing to 700 later in 2015.
Changed:
<
<
(* * *) Comet at SDSC, special allocation
>
>
(* * *) Comet at SDSC with allocation
  ** Available via PBS standby queue with 4-hour walltime
Line: 121 to 121
 
T2_US_MIT  
T2_US_Nebraska Cobbler 2.6 and Puppet 4.8 (opensource)
T2_US_Purdue Foreman 1.2 and Puppet 2.7
Changed:
<
<
T2_US_UCSD Foreman 1.7(1.8) and Puppet 3.7.5
>
>
T2_US_UCSD Foreman (Latest) and Puppet 3.7.5 (4.x)
 
T2_US_Vanderbilt CFEngine 3.6, Puppet 3.2
T2_US_Wisconsin Puppet 3.7.3, ganeti (VM cluster)
Responsible: JamesLetts

Revision 912016-12-22 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 21 to 21
 
  • 2,487 TB

 Assuming 10% of storage and 5% of processing are retired each year, the proposed $200,000 hardware budget for next year results in sites that would look as follows, if a site spends $x on processing:

Deleted:
<
<
 
CPU = (70,000 HS06 * 0.95) + ( x / $6 ) 
Changed:
<
<
Storage = ( 2,400 TB * 0.9 ) + ( $200000 - x ) / $40 ? 3,500 TB
>
>
Storage = ( 2,400 TB * 0.9 ) + ( $200000 - x ) / $40 ? 3,500 TB
  If the deployment goal for April 2017 is the CRSG recommendation plus one PB, or 3,500 TB, then each nominal site will have to spend about $54,000 on storage next year, leaving about $146,000 to spend either on processing or storage. Spending it all on processing would result in a nominal site having about 91kHS06, or ~26 HS06/TB, somewhat (~10%) more skewed in favor of storage than a nominal site today.
Line: 36 to 35
 2018: 32.1 kHS06, 2.25 PB (33.9 kHS06, 2.79 PB)

2016: $275k

Changed:
<
<
2017: $200k ($150k)
>
>
2017: $200k ($150k)
 

U.S. CMS Tier-2 Facilities Deployment Status

Line: 49 to 46
 
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
Changed:
<
<
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
>
>
T2_US_Purdue 111098 8744 8024 3500 100 12/22/16
 
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Total HEP Sites 155861 15976 8328 4440    
Line: 69 to 65
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Changed:
<
<
T2_US_Purdue ~4,616 27,692**   05/11/2016
>
>
T2_US_Purdue 28,092**   05/11/2016
 
T2_US_UCSD 760***   12/03/2015
T2_US_Vanderbilt 400   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016

Revision 902016-12-13 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 46 to 46
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
Changed:
<
<
T2_US_Florida 52228last 4774last* 4774* 2277 100 11/15/16last
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/2016
>
>
T2_US_Florida 52228 4774 4774 2277 100 11/15/16
T2_US_MIT 59000 6500 6500 2500 100 03/14/16
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/16
 
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Changed:
<
<
Total            
>
>
Total HEP Sites            
 
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16

Revision 892016-11-30 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 48 to 48
 
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
T2_US_Florida 52228last 4774last* 4774* 2277 100 11/15/16last
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
Changed:
<
<
T2_US_Nebraska 79930 7184 3592 3300 100 11/15/2016
>
>
T2_US_Nebraska 78318 6944 3472 3300 100 11/30/2016
 
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16

Revision 882016-11-15 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 48 to 48
 
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
T2_US_Florida 52228last 4774last* 4774* 2277 100 11/15/16last
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
Changed:
<
<
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
>
>
T2_US_Nebraska 79930 7184 3592 3300 100 11/15/2016
 
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16

Revision 872016-11-15 - GarhanAttebury

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 123 to 123
 
T2_US_Caltech Foreman 1.2 and Puppet 3.8
T2_US_Florida Florida HiperGator SIS system (Image)
T2_US_MIT  
Changed:
<
<
T2_US_Nebraska Cobbler 2.6 series and Puppet 4.2.2 (opensource)
>
>
T2_US_Nebraska Cobbler 2.6 and Puppet 4.8 (opensource)
 
T2_US_Purdue Foreman 1.2 and Puppet 2.7
T2_US_UCSD Foreman 1.7(1.8) and Puppet 3.7.5
T2_US_Vanderbilt CFEngine 3.6, Puppet 3.2

Revision 862016-11-15 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 46 to 46
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
Changed:
<
<
T2_US_Florida 49285last 4505last* 4505last* 2277 100 31/08/16last
>
>
T2_US_Florida 52228last 4774last* 4774* 2277 100 11/15/16last
 
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16

Revision 852016-11-10 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
 A recent purchase at Wisconsin was approximately 37 kHS06 with 1,280 TB of storage; with the storage estimated at $50/TB, or $64,000, the net price for the racked processing was $262,000, or $7/HS06.
Changed:
<
<
* $50/TB storage
>
>
Estimates for 2017:
  • $6/HS06 processing
  • $40/TB storage
 
Changed:
<
<
* $7/HS06 processing
>
>
Nominal site in November 2016:
  • 70,000 HS06
  • 2,400 TB
 
Changed:
<
<
In 2016, nominal sites will have 2,400 TB of storage and 60,000 HS06 of processing.
>
>
or a ratio of 29 HS06/TB.
 
Changed:
<
<
If the deployment goal for 2017 is pledge plus one PB, or 2,900 TB, then each nominal site will have to purchase ~500 TB of new storage, plus perhaps another 100 TB to cover retirements, or 600 TB, total cost $30,000.
>
>
Our minimum deployment goals (CRSG recommendation) for 2017 is:
  • 30,357 HS06
  • 2,487 TB
 
Changed:
<
<
If the deployment goal for 2017 is revised higher to 3,500 TB, this would be a purchase of 1,200 TB per site or $60,000.
>
>
Assuming 10% of storage and 5% of processing are retired each year, the proposed $200,000 hardware budget for next year results in sites that would look as follows, if a site spends $x on processing:
 
Changed:
<
<
Both goals are well within the hardware budget of $200,000 per site.
>
>
CPU = (70,000 HS06 * 0.95) + ( x / $6 ) 
Storage = ( 2,400 TB * 0.9 ) + ( $200,000 - x ) / $40 =?= 3,500 TB
 
Changed:
<
<
Remaining funds used for processing would leave $170,000 or $140,000 per site, under the two scenarios, representing approximately 24,000 or 20,000 HS06, respectively. A nominal site would then have either 80,000 HS06/3500 TB or 84,000 HS06/2,900 TB, being ratios of ~23 or ~29 HS06/TB, somewhat more skewed in favor of storage than a nominal site today (~30 HS06/TB), but still well within the ability to meet the processing pledge to global CMS.
>
>
If the deployment goal for April 2017 is the CRSG recommendation plus one PB, or 3,500 TB, then each nominal site will have to spend about $54,000 on storage next year, leaving about $146,000 to spend either on processing or storage. Spending it all on processing would result in a nominal site having about 91kHS06, or ~26 HS06/TB, somewhat (~10%) more skewed in favor of storage than a nominal site today.
  Notes:

Revision 832016-11-02 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 43 to 43
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
Deleted:
<
<
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
 
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Total 85512 8800 4740 2440    
Added:
>
>
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
 (*) Florida Slots are reduced due to Florida HPC 5 year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but very unlikely to get the extention.

Opportunistic Computing Resources

Revision 812016-09-30 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 21 to 21
 Remaining funds used for processing would leave $170,000 or $140,000 per site, under the two scenarios, representing approximately 24,000 or 20,000 HS06, respectively. A nominal site would then have either 80,000 HS06/3500 TB or 84,000 HS06/2,900 TB, being ratios of ~23 or ~29 HS06/TB, somewhat more skewed in favor of storage than a nominal site today (~30 HS06/TB), but still well within the ability to meet the processing pledge to global CMS.
Added:
>
>
Notes:
2016: 25.0 kHS06, 1.36 PB
2017: 26.8 kHS06, 1.90 PB (28.6 kHS06, 2.32 PB)
2018: 32.1 kHS06, 2.25 PB (33.9 kHS06, 2.79 PB)

2016: $275k
2017: $200k ($150k)
 

U.S. CMS Tier-2 Facilities Deployment Status

Revision 802016-09-23 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 49 to 49
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Changed:
<
<
T2_US_Purdue ~4,616 20,800**   05/11/2016
>
>
T2_US_Purdue ~4,616 27,692**   05/11/2016
 
T2_US_UCSD 760***   12/03/2015
T2_US_Vanderbilt 400   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016

Revision 792016-08-31 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 28 to 28
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
Changed:
<
<
T2_US_Florida 46769 4275* 4275* 2277 100 04/05/16
>
>
T2_US_Florida 49285last 4505last* 4505last* 2277 100 31/08/16last
 
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16

Revision 782016-08-25 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Added:
>
>

Purchasing Power Estimates

A recent purchase at Wisconsin was approximately 37 kHS06 with 1,280 TB of storage; with the storage estimated at $50/TB, or $64,000, the net price for the racked processing was $262,000, or $7/HS06.

* $50/TB storage

* $7/HS06 processing

In 2016, nominal sites will have 2,400 TB of storage and 60,000 HS06 of processing.

If the deployment goal for 2017 is pledge plus one PB, or 2,900 TB, then each nominal site will have to purchase ~500 TB of new storage, plus perhaps another 100 TB to cover retirements, or 600 TB, total cost $30,000.

If the deployment goal for 2017 is revised higher to 3,500 TB, this would be a purchase of 1,200 TB per site or $60,000.

Both goals are well within the hardware budget of $200,000 per site.

Using the remaining funds for processing would leave $170,000 or $140,000 per site under the two scenarios, representing approximately 24,000 or 20,000 HS06, respectively. A nominal site would then have either 80,000 HS06 / 3,500 TB or 84,000 HS06 / 2,900 TB, i.e. ratios of ~23 or ~29 HS06/TB, somewhat more skewed in favor of storage than a nominal site today (~30 HS06/TB), but still well within the ability to meet the processing pledge to global CMS.
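As a cross-check of the two scenarios above, a minimal sketch using the unit prices implied by the Wisconsin purchase and the 60,000 HS06 / 2,400 TB nominal site described earlier in this section; the names are illustrative only:
<verbatim>
# The two 2017 deployment scenarios above: unit prices implied by the
# Wisconsin purchase ($50/TB, $7/HS06), a $200,000 per-site budget, and a
# nominal site with 60,000 HS06 today. Figures are taken from the text.
BUDGET = 200_000
PRICE_TB, PRICE_HS06 = 50, 7
CPU0 = 60_000                      # nominal site processing today (HS06)

# scenario name -> (storage goal in TB, new TB to buy incl. retirements)
scenarios = {"pledge + 1 PB": (2_900, 600), "revised goal": (3_500, 1_200)}

for name, (goal_tb, new_tb) in scenarios.items():
    storage_cost = new_tb * PRICE_TB
    cpu_total = CPU0 + (BUDGET - storage_cost) / PRICE_HS06
    print(f"{name}: storage ${storage_cost:,}, "
          f"~{cpu_total:,.0f} HS06, ~{cpu_total / goal_tb:.0f} HS06/TB")
# pledge + 1 PB: storage $30,000, ~84,286 HS06, ~29 HS06/TB
# revised goal:  storage $60,000, ~80,000 HS06, ~23 HS06/TB
</verbatim>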

 

U.S. CMS Tier-2 Facilities Deployment Status

This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Revision 772016-08-23 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 11 to 11
 
T2_US_Florida 46769 4275* 4275* 2277 100 04/05/16
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
Changed:
<
<
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
>
>
T2_US_Purdue 104606 6552 6552 2489 100 08/23/16
 
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
Line: 32 to 29
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
Changed:
<
<
T2_US_Purdue ~4,616   05/11/2016
>
>
T2_US_Purdue ~4,616 20,800**   05/11/2016
 
T2_US_UCSD 760***   12/03/2015
T2_US_Vanderbilt 400   05/25/2016
T2_US_Wisconsin ~1,500   07/14/2016
Line: 41 to 37
 (*) Increasing to 700 later in 2015.
(* * *) Comet at SDSC, special allocation
Added:
>
>
** Available via PBS standby queue with 4-hour walltime
 

Discovery of Opportunistic Batch Slots with the Global Pool

Between March 11 and 27, 2015, the machine name and number of CPUs of every worker node where a glidein ran in the glideinWMS Global Pool were recorded and analyzed to find the maximum number of uniquely identifiable batch slots at each site. The results for the U.S. Tier-2 sites are given in the table below, along with an overall global summary. In general, opportunistic machines at Purdue and Florida were easily accessible.
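A minimal sketch of the kind of aggregation described above, assuming the glidein records are available as (site, machine name, CPU count) tuples; the example rows and field layout are hypothetical and do not reflect the actual Global Pool tooling:
<verbatim>
# Count the maximum number of uniquely identifiable batch slots per site
# from recorded glidein observations. Each record is (site, machine, cpus);
# the rows below are illustrative placeholders only.
from collections import defaultdict

observations = [
    ("T2_US_Purdue", "node-a01.example.edu", 16),
    ("T2_US_Purdue", "node-a01.example.edu", 16),   # same machine seen twice
    ("T2_US_Purdue", "node-b02.example.edu", 24),
    ("T2_US_Florida", "wn-17.example.edu", 32),
]

# Keep the largest CPU count ever reported for each unique (site, machine),
# then sum per site to get the maximum uniquely identifiable slots.
max_cpus = defaultdict(int)
for site, machine, cpus in observations:
    max_cpus[(site, machine)] = max(max_cpus[(site, machine)], cpus)

slots = defaultdict(int)
for (site, _machine), cpus in max_cpus.items():
    slots[site] += cpus

for site, n in sorted(slots.items()):
    print(f"{site}: {n} slots")
</verbatim>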

Revision 762016-07-14 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 14 to 14
 
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
Changed:
<
<
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
>
>
T2_US_Wisconsin 85512 8800 4740 2440 100 07/14/16
 
Total            

(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely that we will get the extension.

Line: 35 to 35
 
T2_US_Purdue ~4,616   05/11/2016
T2_US_UCSD 760***   12/03/2015
T2_US_Vanderbilt 400   05/25/2016
Changed:
<
<
T2_US_Wisconsin ~1,500   02/28/2015
>
>
T2_US_Wisconsin ~1,500   07/14/2016
 
Total >23,900    

(*) Increasing to 700 later in 2015.

Revision 742016-05-31 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 7 to 7
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 66831 7318   2400 100 01/12/16
>
>
T2_US_Caltech 58163 6926 3792 2400 100 01/12/16
 
T2_US_Florida 46769 4275* 4275* 2277 100 04/05/16
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15

Revision 732016-05-25 - CharlesMaguire

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 13 to 13
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
Added:
>
>
T2_US_Vanderbilt 49420 2198 2198 3200 10 05/25/16
 
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
Total 75089 7864 4348 2250    
Line: 33 to 34
 
T2_US_Nebraska ~4,000   05/11/2016
T2_US_Purdue ~4,616   05/11/2016
T2_US_UCSD 760***   12/03/2015
Added:
>
>
T2_US_Vanderbilt 400   05/25/2016
 
T2_US_Wisconsin ~1,500   02/28/2015
Total >23,900    
Line: 50 to 52
 
T2_US_Nebraska 9,688
T2_US_Purdue 22,458
T2_US_UCSD 4,643
Added:
>
>
T2_US_Vanderbilt 2,598
 
T2_US_Wisconsin 7,252
All Sites 205,080
All Tier-1 Sites 43,229
Line: 72 to 75
 
T2_US_Nebraska 9557 -5840 3717 3000
T2_US_Purdue 16217 -6436 9781 9200
T2_US_UCSD 5329 -5256 73 0
Changed:
<
<
T2_US_Wisconsin 10573 -7860 2713 1500
>
>
T2_US_Vanderbilt 2598 -2198 400 0
T2_US_Wisconsin 10573 -7860 2713 1500
 
Total        

Cluster Management

Revision 722016-05-20 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 7 to 7
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 66831 7318   2000 100 01/12/16
>
>
T2_US_Caltech 66831 7318   2400 100 01/12/16
 
T2_US_Florida 46769 4275* 4275* 2277 100 04/05/16
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15

Revision 712016-05-18 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 32 to 32
 
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~4,000   05/11/2016
T2_US_Purdue ~4,616   05/11/2016
Changed:
<
<
T2_US_UCSD ~4,000***   12/03/2015
>
>
T2_US_UCSD 760***   12/03/2015
 
T2_US_Wisconsin ~1,500   02/28/2015
Total >23,900    

(*) Increasing to 700 later in 2015.

Changed:
<
<
(* * *) Gordon and Comet at SDSC, special allocation registered as T3_US_SDSC for Gordon
>
>
(* * *) Comet at SDSC, special allocation
 

Discovery of Opportunistic Batch Slots with the Global Pool

Revision 702016-05-11 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 30 to 30
 
T2_US_Caltech 200*   02/19/2015
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
Changed:
<
<
T2_US_Nebraska ~3,000   02/19/2015
>
>
T2_US_Nebraska ~4,000   05/11/2016
 
T2_US_Purdue ~4,616   05/11/2016
T2_US_UCSD ~4,000***   12/03/2015
T2_US_Wisconsin ~1,500   02/28/2015

Revision 692016-05-11 - ErikGough

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 31 to 31
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 1500 900 02/19/2015
T2_US_Nebraska ~3,000   02/19/2015
Changed:
<
<
T2_US_Purdue ~4,256**   11/25/2015
>
>
T2_US_Purdue ~4,616   05/11/2016
 
T2_US_UCSD ~4,000***   12/03/2015
T2_US_Wisconsin ~1,500   02/28/2015
Total >23,900    

(*) Increasing to 700 later in 2015.

Deleted:
<
<
(**) Maximum available; however, given that the clusters are typically busy, only ~4,000 extra batch slots are available to CMS.
 (* * *) Gordon and Comet at SDSC, special allocation registered as T3_US_SDSC for Gordon

Discovery of Opportunistic Batch Slots with the Global Pool

Revision 682016-04-05 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 66831 7318   2000 100 01/12/16
Changed:
<
<
T2_US_Florida 37971 4275* 4275* 2277 100 03/16/16
>
>
T2_US_Florida 46769 4275* 4275* 2277 100 04/05/16
 
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16

Revision 672016-03-16 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 66831 7318   2000 100 01/12/16
Changed:
<
<
T2_US_Florida 29311 3300* 3300* 2277 100 03/09/16
>
>
T2_US_Florida 37971 4275* 4275* 2277 100 03/16/16
 
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
Line: 16 to 16
 
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
Total 75089 7864 4348 2250    
Changed:
<
<
(*) Florida Slots are reduced due to Florida HPC 5 year hardware retirement policy. We are negotiating with HPC to extend the retired cores.
>
>
(*) Florida slots are reduced due to the Florida HPC 5-year hardware retirement policy. We are negotiating with HPC to extend the retired cores, but it is very unlikely that we will get the extension.
 

Revision 662016-03-14 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 9 to 9
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 66831 7318   2000 100 01/12/16
T2_US_Florida 29311 3300* 3300* 2277 100 03/09/16
Changed:
<
<
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
>
>
T2_US_MIT 59000 6500 6500 2500 100 14/03/16
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15

Revision 652016-03-09 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 66831 7318   2000 100 01/12/16
Changed:
<
<
T2_US_Florida 45138 2950* 2950* 2277 100 02/15/16
>
>
T2_US_Florida 29311 3300* 3300* 2277 100 03/09/16
 
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16

Revision 642016-02-15 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 66831 7318   2000 100 01/12/16
Changed:
<
<
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
>
>
T2_US_Florida 45138 2950* 2950* 2277 100 02/15/16
 
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
Line: 16 to 16
 
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
Total 75089 7864 4348 2250    
Added:
>
>
(*) Florida Slots are reduced due to Florida HPC 5 year hardware retirement policy. We are negotiating with HPC to extend the retired cores.
 

Opportunistic Computing Resources

Revision 632016-01-26 - ManojJha

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 11 to 11
 
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
Changed:
<
<
T2_US_Purdue 104247 6532 6532 2489 100 01/11/16
>
>
T2_US_Purdue 104606 6552 6552 2489 100 01/26/16
 
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
Total 145438 15040 7936 4250    

Revision 622016-01-13 - ThomasWayneHendricks

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 7 to 7
 This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.

Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
Changed:
<
<
T2_US_Caltech 53417 5780   2000 100 03/18/15
>
>
T2_US_Caltech 66831 7318   2000 100 01/12/16
 
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
Line: 80 to 80
 
Site Tools
T2_BR_SPRACE  
T2_BR_UERJ Kickstart and Ansible(Red Hat's resources)
Changed:
<
<
T2_US_Caltech  
>
>
T2_US_Caltech Foreman 1.2 and Puppet 3.8
 
T2_US_Florida Florida HiperGator SIS system (Image)
T2_US_MIT  
T2_US_Nebraska Cobbler 2.6 series and Puppet 4.2.2 (opensource)

Revision 612016-01-11 - ManojJha

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 11 to 11
 
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
Changed:
<
<
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
>
>
T2_US_Purdue 104247 6532 6532 2489 100 01/11/16
 
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
Total 145438 15040 7936 4250    
Line: 34 to 34
 
Total >23,900    

(*) Increasing to 700 later in 2015.

Changed:
<
<
(**) Maximum available; however, given that the clusters are typically busy, only ~6,000 extra batch slots are available to CMS.
>
>
(**) Maximum available; however, given that the clusters are typically busy, only ~4,000 extra batch slots are available to CMS.
 (* * *) Gordon and Comet at SDSC, special allocation registered as T3_US_SDSC for Gordon

Discovery of Opportunistic Batch Slots with the Global Pool

Revision 602015-12-18 - AjitMohapatra

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 13 to 13
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
Changed:
<
<
T2_US_Wisconsin 78700 7860 4348 2250 100 12/04/15
>
>
T2_US_Wisconsin 75089 7864 4348 2250 100 12/18/15
 
Total            

Revision 592015-12-08 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 9 to 9
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
Changed:
<
<
T2_US_MIT 59000 6500 6500 2500 10 02/17/15
>
>
T2_US_MIT 59000 6500 6500 2500 10 12/08/15
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15

Revision 582015-12-08 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 9 to 9
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
Changed:
<
<
T2_US_MIT 37430 5200   2000 10 02/17/15
>
>
T2_US_MIT 59000 6500 6500 2500 10 02/17/15
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15

Revision 572015-12-06 - MaximGoncharov

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 26 to 26
 
Site Batch Slots Space for hosting (TB) Last update
T2_US_Caltech 200*   02/19/2015
T2_US_Florida ~4,000   02/19/2015
Changed:
<
<
T2_US_MIT 0 400 02/19/2015
>
>
T2_US_MIT 1500 900 02/19/2015
 
T2_US_Nebraska ~3,000   02/19/2015
T2_US_Purdue ~4,256**   11/25/2015
T2_US_UCSD ~4,000***   12/03/2015

Revision 562015-12-04 - CarlVuosalo

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 13 to 13
 
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
Changed:
<
<
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
>
>
T2_US_Wisconsin 78700 7860 4348 2250 100 12/04/15
 
Total            

Revision 552015-12-03 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 12 to 12
 
T2_US_MIT 37430 5200   2000 10 02/17/15
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
Changed:
<
<
T2_US_UCSD 49229 7176 3588 2000 80 12/03/15
>
>
T2_US_UCSD 70349 7176 3588 2000 80 12/03/15
 
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
Total 78700 7860   2250    
Line: 29 to 29
 
T2_US_MIT 0 400 02/19/2015
T2_US_Nebraska ~3,000   02/19/2015
T2_US_Purdue ~4,256**   11/25/2015
Changed:
<
<
T2_US_UCSD ~3,000***   02/19/2015
>
>
T2_US_UCSD ~4,000***   12/03/2015
 
T2_US_Wisconsin ~1,500   02/28/2015
Total >23,900    

(*) Increasing to 700 later in 2015.
(**) Maximum available; however, given that the clusters are typically busy, only ~6,000 extra batch slots are available to CMS.

Changed:
<
<
(* * *) Gordon at SDSC, special allocation registered as T3_US_SDSC
>
>
(* * *) Gordon and Comet at SDSC, special allocation registered as T3_US_SDSC for Gordon
 

Discovery of Opportunistic Batch Slots with the Global Pool

Revision 542015-12-03 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 10 to 10
 
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
T2_US_MIT 37430 5200   2000 10 02/17/15
Changed:
<
<
T2_US_Nebraska 65650 5840 2920 2200 100 11/03/15
T2_US_Purdue 96785 6636 6636 2150 100 11/03/15
>
>
T2_US_Nebraska 65650 5840 2920 2200 100 12/03/15
T2_US_Purdue 96785 6636 6636 2150 100 12/03/15
 
T2_US_UCSD 49229 7176 3588 2000 80 12/03/15
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
Total 127929 15036 3588 4250    

Revision 532015-12-03 - TerrenceMartin1

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 12 to 12
 
T2_US_MIT 37430 5200   2000 10 02/17/15
T2_US_Nebraska 65650 5840 2920 2200 100 11/03/15
T2_US_Purdue 96785 6636 6636 2150 100 11/03/15
Changed:
<
<
T2_US_UCSD 49229 5256   2000 80 02/18/15
>
>
T2_US_UCSD 49229 7176 3588 2000 80 12/03/15
 
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
Total 78700 7860   2250    

Revision 522015-12-03 - CarlLundstedt

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 10 to 10
 
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
T2_US_MIT 37430 5200   2000 10 02/17/15
Changed:
<
<
T2_US_Nebraska 65650 5840   2200 100 11/03/15
>
>
T2_US_Nebraska 65650 5840 2920 2200 100 11/03/15
 
T2_US_Purdue 96785 6636 6636 2150 100 11/03/15
T2_US_UCSD 49229 5256   2000 80 02/18/15
T2_US_Wisconsin 78700 7860   2250 100 11/03/15

Revision 512015-12-03 - BockjooKim

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 8 to 8
 
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 53417 5780   2000 100 03/18/15
Changed:
<
<
T2_US_Florida 45138 4126   2277 100 11/03/15
>
>
T2_US_Florida 45138 4126 4126 2277 100 12/03/15
 
T2_US_MIT 37430 5200   2000 10 02/17/15
T2_US_Nebraska 65650 5840   2200 100 11/03/15
T2_US_Purdue 96785 6636 6636 2150 100 11/03/15

Revision 502015-12-03 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
  This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.
Changed:
<
<
Site CPU (HS06) Batch Slots Hyperthreading? Space for hosting (TB) WAN (Gb/s) Last update
>
>
Site CPU (HS06) Batch Slots Physical Cores Space for hosting (TB) WAN (Gb/s) Last update
 
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126   2277 100 11/03/15
T2_US_MIT 37430 5200   2000 10 02/17/15
T2_US_Nebraska 65650 5840   2200 100 11/03/15
Changed:
<
<
T2_US_Purdue 96785 6636 No 2150 100 11/03/15
>
>
T2_US_Purdue 96785 6636 6636 2150 100 11/03/15
 
T2_US_UCSD 49229 5256   2000 80 02/18/15
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
Changed:
<
<
Total            
>
>
Total            
 

Opportunistic Computing Resources

Revision 492015-12-03 - JamesLetts

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 6 to 6
  This table is used to track deployment status at the U.S. CMS Tier-2 sites. Sites should update the table on the page as they deploy new equipment. This page reflects the "true" deployments paid for from U.S. CMS Tier-2 funds, as distinct from what we report to CMS as what we've pledged.
Changed:
<
<
Site CPU (HS06) Batch Slots Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 53417 5780 2000 100 03/18/15
T2_US_Florida 45138 4126 2277 100 11/03/15
T2_US_MIT 37430 5200 2000 10 02/17/15
T2_US_Nebraska 65650 5840 2200 100 11/03/15
T2_US_Purdue 96785 6636 2150 100 11/03/15
T2_US_UCSD 49229 5256 2000 80 02/18/15
T2_US_Wisconsin 78700 7860 2250 100 11/03/15
Total 426349 40698 14877    
>
>
Site CPU (HS06) Batch Slots Hyperthreading? Space for hosting (TB) WAN (Gb/s) Last update
T2_US_Caltech 53417 5780   2000 100 03/18/15
T2_US_Florida 45138 4126   2277 100 11/03/15
T2_US_MIT 37430 5200   2000 10 02/17/15
T2_US_Nebraska 65650 5840   2200 100 11/03/15
T2_US_Purdue 96785 6636 No 2150 100 11/03/15
T2_US_UCSD 49229 5256   2000 80 02/18/15
T2_US_Wisconsin 78700 7860   2250 100 11/03/15
Total 426349 40698   14877    
 

Opportunistic Computing Resources

Revision 482015-11-25 - ManojJha

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 28 to 28
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 0 400 02/19/2015
T2_US_Nebraska ~3,000   02/19/2015
Changed:
<
<
T2_US_Purdue ~4,968**   03/02/2015
>
>
T2_US_Purdue ~4,256**   11/25/2015
 
T2_US_UCSD ~3,000***   02/19/2015
T2_US_Wisconsin ~1,500   02/28/2015
Total >23,900    

Revision 472015-11-03 - ManojJha

Line: 1 to 1
 
META TOPICPARENT name="TWiki.WebPreferences"
Line: 11 to 11
 
T2_US_Florida 45138 4126 2277 100 11/03/15
T2_US_MIT 37430 5200 2000 10 02/17/15
T2_US_Nebraska 65650 5840 2200 100 11/03/15
Changed:
<
<
T2_US_Purdue 93196 6436 2150 100 10/29/15
>
>
T2_US_Purdue 96785 6636 2150 100 11/03/15
 
T2_US_UCSD 49229 5256 2000 80 02/18/15
T2_US_Wisconsin 78700 7860 2250 100 11/03/15
Total 127929 13116 4250    
Line: 28 to 28
 
T2_US_Florida ~4,000   02/19/2015
T2_US_MIT 0 400 02/19/2015
T2_US_Nebraska ~3,000   02/19/2015
Changed:
<
<
T2_US_Purdue ~9,200**   03/02/2015
>
>
T2_US_Purdue ~4,968**   03/02/2015
 
T2_US_UCSD ~3,000***   02/19/2015
T2_US_Wisconsin