Summary of the Hardware Purchasing Process for the U.S. University Facilities

Sites generally run a competitive bidding process with at least three vendors for the equipment they wish to purchase. Although our purchase model is "build to cost", sites are expected to obtain the best price for the equipment, while retaining leeway to choose vendors of high quality or reputation. Once a site has chosen a vendor to fulfill its order, it sends a purchase quote to the U.S. University Facilities leads for review. The review checks that the proposed purchase fulfills program needs and deployment objectives, and that the price is competitive with similar purchases by other sites. Generally we observe that pricing improves over time for both storage and processing. Occasionally a proposed purchase is rejected for lack of detail and sent back to the site for clarification. All proposed purchases are archived, including the dates on which approval was requested and granted. When subsequent purchase invoices are received from the sites for review before payment, they are compared to the quotes and to any purchase orders received by the University Facilities leads.

Tier-2 NSF Invoices for 2018

Site Date of Invoice Amount Link/Comment
T2_US_MIT 10/10/18 128581 MIT_FY18_90302976.pdf
T2_US_Wisconsin 12/04/18 158130 Wisconsin_FY18_20.pdf
Total   286711  

Tier-2 Hardware Purchases in 2018

Site Date Requested Date Approved Amount Type Details
T2_US_Caltech 04/10/18   28500.00 Storage AH8Q7141-A.pdf 720TB raw, $39.58/TB raw.
T2_US_Florida 05/30/18 05/30/18 720.00 Storage e-mail only
T2_US_Purdue 07/05/18 07/05/18 146640.00 CPU, Storage E-mail Quote $112k of compute @ $9.22/HS06 (864 slots w/ 4 GB ram/slot) and $35k on storage @ $48/TB for a total of 720 TB raw
T2_US_MIT 07/26/18 07/31/18 128580.75 CPU Quote_US_PC_SC_3000027191643.1_2018-07-25.pdf ~$8/HS06 (est.) w/ 2GB/thread
T2_US_Caltech 07/30/18 07/31/18 62804.16 CPU Quote2CRSITier2Gold6138Systems2U4Nodes_AlainWilmouth073018.pdf $4.36/HS06, 2.4GB/thread - CANCELLED BY VENDOR
T2_US_Caltech 08/07/18 08/09/18 90926.00 CPU Quote2CRSITier2Gold6138andHigherSystems2U4Nodes_AlainWilmouth080718.pdf $6/HS06, 2.4GB/T
T2_US_Florida 09/11/18 09/11/18 813.98 errata 2 replacement mother boards and one hard drive
T2_US_Wisconsin 10/02/18 10/08/18 158130.00 CPU, Storage 816 cores (768+48) and 1920 TB storage, HS06/slot=10.2
T2_US_Nebraska 10/05/18 10/08/18 149388.18 CPU, Storage Quote_US_PC_SC_3000029588584.1_2018-10-05.pdf $9.18/HS06
T2_US_Florida 11/13/18 11/13/18 419.99 errata 1 replacement mother board for Lustre OSS server (by email, quote from Newegg)
T2_US_Florida 11/30/18 11/30/18 777.00 Storage (plus tax, shipping) spare 6TB SAS drives for the Lustre (by e-mail)
T2_US_Florida 12/05/18 12/05/18 799.00 Storage (plus tax, shipping) spare 4TB SAS drive for the Lustre (by e-mail)
Total     486468.15    
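The per-unit prices quoted in the Details column follow directly from the amount and capacity in each row. As a minimal sketch, the Caltech storage figure above can be reproduced with a one-line awk call (numbers taken from the table):

```shell
# $28,500 for 720 TB raw (Caltech, 04/10/18) gives the quoted $/TB figure.
awk 'BEGIN { printf "$%.2f/TB raw\n", 28500.00 / 720 }'
# prints: $39.58/TB raw
```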

Tier-2 NSF Invoices for 2017

Site Date of Invoice Amount Link/Comment
T2_US_UCSD 11/06/17 215533.63 Includes 3 T3-in-a-box?
T2_US_MIT 12/31/17 157317.42 MIT_FY17_90289424.pdf
T2_US_Caltech 10/31/17 137814.23 Caltech_FY17_06.pdf
T2_US_Purdue 12/31/17 202390.64 Purdue_FY17_900238143.pdf
T2_US_Nebraska 12/31/17 185334.24 Nebraska_FY17_2605110161.08.pdf
T2_US_Florida 10/19/17 67503.50 CMS_RAID_system_invoice_2017nov.pdf
T2_US_Wisconsin 12/31/17 184000.00 Wisconsin_FY17_09.pdf
Total   1149893.66  

Tier-2 Hardware Purchases in 2017

This year is the first year of the new grant, so there is no carryover money. The hardware budget per site is $200,000, or $1,400,000 in total.

Site Date Requested Date Approved Amount Type Details
T2_US_UCSD 06/29/17 Revised   CPU 32x2 Intel Xeon E5-2660V4 14C/28T = 1,792 batch slots
T2_US_UCSD 07/06/17 07/06/17 178505.56 CPU 28 x 2 Intel Xeon E5-2680V4 14C/28T, 1,586 batch slots
T2_US_Caltech 07/24/17 07/26/17 109888.00 CPU, Storage, and DTN Quote Notes
T2_US_Purdue 09/29/17 09/29/17 202515.64 CPU and Storage Purdue-Quote-2017.pdf
T2_US_Wisconsin 10/04/17 10/04/17 184000.00 CPU and Storage WISCONSIN-QUOTE-KSCQ15138.pdf
T2_US_Florida 10/12/17 10/13/17 67503.50 Storage FLORIDA-AH7Q6417-J.pdf 720TB RAID storage solution
T2_US_MIT 10/13/17 10/13/17 64160.60 CPU MIT-Quote_US_PC_SC_3000018186937.2_2017-10-10.pdf PowerEdge R730xd ~$10-11/HS06.
T2_US_MIT 10/30/17 10/30/17 67400.00 Storage $46/TB disk quote, WD 6TB drives and trays
T2_US_Nebraska 10/30/17 10/30/17 145619.76 CPU $9.80/HS06 processing
T2_US_Nebraska 11/13/17 11/14/17 39714.48 CPU $9.80/HS06 processing
T2_US_Florida 11/13/17 11/22/17 6879.60 CVMFS For CVMFS over NFS
Total     1066187.14    

JamesLetts - 2017-11-15

Tier-2 NSF Invoices for 2016

Site Date Amount Document
T2_US_UCSD 05/20/2016 298461.76 UCSD_FY15_87983A0029.pdf
T2_US_Florida 05/31/2016 0.00 UofFlorida_FY2016_M000208303.pdf
T2_US_UCSD 09/20/2016 295901.60 UCSD_FY16_87983A0033.pdf
T2_US_Purdue 10/31/2016 275000.00 Purdue-invoice-Oct2016.png
T2_US_MIT 01/11/2017 135465.00 Accounting info, MIT_FY16_90270330.pdf
T2_US_Wisconsin 02/03/2017 163809.56 Table from Invoice 51 Sept 2016
T2_US_Florida 02/09/2017 140000.00 UofFlorida_FY2016_M000220089.pdf
T2_US_Wisconsin 02/20/2017 163053.12 Wisconsin_FY16__MSN0529279.52.pdf
T2_US_Caltech 02/21/2017 147993.15 Calltech_FY16_Voucher_55.pdf
Total   1619684.19  

Tier-2 Hardware Purchases in 2016

Note that some POs are a mixture of 2015 and 2016 money.

The hardware spending budget for 2016 is $275,000 per site, up from $250,000 in 2015.

Site Date Requested Date Approved Amount Type Details
T2_US_Wisconsin 02/15/16 02/19/16 129000.00 CPU & Storage 2x 10-Core, 128GB RAM, 18TB SATA, 1U
Note that this includes ~$80,000 of 2015 hardware money.
T2_US_UCSD 05/24/2016 05/24/16 292352.71 CPU 52 2x12-core, 128GB RAM, 2U
Lots of carry-over money.
T2_US_Caltech 07/29/16 07/29/16 18030.00 Storage 360TB, 6U, @$50/TB
T2_US_Wisconsin 08/08/2016 08/09/16 326106.24 CPU & Storage 64 2x12-core, 160GB RAM, 20 TB SATA, 1U
Found lots of carryover money at the end of the grant period.
T2_US_Purdue 09/02/16 09/06/2016 282050.69 CPU, Storage & Infrastructure 36 servers with 2x10-core Xeon-E5-2660v3, 128 GB memory, 480 GB local SSD, 25 Gbps and a 5-year warranty, 10 servers with 36 x 6 TB enterprise SAS 12Gb/s HDDs, 2 E5-2603v4 6-Core 1.7GHz CPUs, 32 GB RAM, 12 Gb/s LSI SAS HBA, dual 10 GbE, 2 x 120 GB SSD disks and 5-year warranty, Name servers, HP Servers
T2_US_MIT 10/03/16 10/03/16 135465.00 CPU 30 x 2x10-core Intel Xeon E5-2640, 128 GB RAM each
T2_US_MIT 10/04/16 10/05/16 49998.00 Storage e-mail: 1,200 TB
T2_US_Wisconsin 10/05/16 10/09/16 36000.00 CPU e-mail: HP XL170, each with two 10-core Intel Xeon-E5-2660v3 processors (20 cores per node), 128 GB of memory, 480 GB local SSD, 25 Gbps Ethernet
T2_US_Wisconsin 10/18/16 10/19/16 50386.23 CPU + Storage Similar config to the one from 08/08/2016
T2_US_Caltech 11/04/16 11/05/16 115617.00 CPU Caltech_Quote_2016-11-2_SuperMicro_MicroBlade_6U_EN_1.7_US_price_HP.pdf 2016-10-31_Caltech_Quote_JBOD_and_HGST_drives_EN_01_HP.pdf 2016-11-02_Caltech_Quote_SMC_2U_Server_EN_03_HP.pdf 864 Cores,
T2_US_Nebraska N/A N/A 8019.77 CPU Quote_US_PC_SC_3000002099042.1_2016-11-08.pdf
T2_US_Nebraska N/A N/A 274996.13 CPU UNIV_OF_NEBRASKA__LINCOLN_S3710v2.xlsm
Total     1718021.77    

Tier-2 NSF Invoices for 2015

Site Date Amount Document
T2_US_MIT 03/07/16 68442.50 MIT_FY15_90257712.pdf
T2_US_MIT 01/12/16 127067.40 MIT_FY15_90255127.pdf
T2_US_Caltech 01/14/16 180512.28 Calltech_FY15_Voucher_43.pdf
T2_US_Purdue 01/11/16 192500.00 Purdue_FY15_900185656.pdf
T2_US_Purdue 02/10/16 55094.60 Purdue_FY15_900188530.pdf
Total   623616.78  

Tier-2 Hardware Purchases in 2015

Table to keep track of hardware purchases by the Tier-2 sites. Type is CPU, Storage, or Networking. Budget per site is ~$250,000, or $1,750,000 in total over all sites.

Site Date Requested Date Approved Amount Type Details
T2_US_Nebraska 06/23/15   49103.36 Networking Dell Networking Equipment
T2_US_Wisconsin 08/31/15 09/01/15 131739.50 CPU 2x 10-Core, 128GB RAM, 2TB SATA, 1U
T2_US_UCSD 09/14/15 09/15/15 261334.97 CPU + Storage 11 x (2x10x2 cores hyper threaded cores, 128 GB + 12 x 8TB), 2U
T2_US_Purdue 09/11/15 09/15/15 57220.35 Storage 3 x 216 TB (36 x 6 TB enterprise HDDs), 32 GB of RAM, a 12 Gb/s LSI SAS HBA, dual 10 GbE, 2 x 80 GB SSD OS and 5-year warranty + Warranty extension on 2 older servers
T2_US_Purdue 09/11/15 09/15/15 192500.00 CPU 44 HP DL60 compute nodes, each with two 10-core Intel Xeon-E5 processors (20 cores per node), 64 GB of memory, 10 Gbps Ethernet connections and a 5-year warranty
T2_US_Caltech 10/20/15 10/23/15 172224.00 CPU + storage 32 nodes, Dual E5-2650v3 10-core processors, 128GB RAM per node, 3x 8TB drives per node, 768TB total, 40 threads per node, 1280 total slots
T2_US_MIT 10/29/15 10/30/15 125238.00 CPU 28 nodes, Dual Intel Xeon E5-2640 8-core processors, 2x16 threads per node, 896 total batch slots, 3GB RAM/thread:
T2_US_Caltech 04/13/16 04/15/16 15571.00 CPU 4 nodes, Dual Intel Xeon E5-2650 10C/20T per node, 160 slots total, 3.2 GB/slot
T2_US_MIT 07/21/16 07/21/16 67092.00 CPU MIT-4501991298.pdf
T2_US_MIT 07/21/16 07/21/16 5,249.80 Other MIT-4501991298.pdf
Total     1072028.18    

Space Query of US Tier-2 Sites - August 2015

Site Total Space (TB) PhEDEx Used (TB) Local (TB) Declared Free & Usable (TB) Rep Max % Last update
T2_US_Caltech 2000 1080 61 1060 1   2015-08-07
T2_US_Florida 2191 1100 353 317 2 95% 2015-08-05
T2_US_MIT 2000 843 2 0 N/A 90% 2015-08-05
T2_US_Nebraska 2200 1030 7 500 2 90% 2015-08-05
T2_US_Purdue 2150 1200 103 324 2 90% 2015-08-07
T2_US_UCSD 2000 670 53 120 1 90% 2015-08-07
T2_US_Wisconsin 2250 953 226 0 N/A 90% 2015-08-05
Total 14791 6876 805 2321      

  • Total Space (TB): total usable disk space purchased with U.S. CMS funds. Often data are replicated so that the raw disk space is some factor greater than usable. Replication factors vary by site and technology used, but typically ~2.
  • PhEDEx Used (TB): total disk space used (resident) according to PhEDEx on August 5, 2015.
  • Local (TB): Of the PhEDEx space in the previous column, how much is subscribed to the local group.
  • Declared Free & Usable (TB): How much free space there is at the site that can be effectively used, i.e. most sites need to leave a buffer of O(10%) free space in order to balance the storage, etc. Free space also assumes a replication factor of 2, so that effective space is half raw space.
  • Rep. Replication level the Declared Free & Usable space assumes.
  • Max %: The maximum percentage full at which the storage system can comfortably run.
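The relationship between raw and effective space described in the notes above can be sketched numerically. The figures below are illustrative only, not taken from the table:

```shell
# Effective usable space from raw disk, assuming a replication factor of 2
# and a ~10% operational buffer (illustrative numbers, not site data).
raw_tb=2000; replication=2; buffer=0.10
awk -v raw="$raw_tb" -v rep="$replication" -v buf="$buffer" \
  'BEGIN { printf "effective usable: %.0f TB\n", raw / rep * (1 - buf) }'
# prints: effective usable: 900 TB
```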

Space Query of US Tier-2 Sites - January 2017

Site Total Space DDM Quota PhEDEx Used Local Rep Max % User Group
T2_US_Caltech 2820 1425 1382 88 1 90% 256 10
T2_US_Florida 2277 1225 2150 275 2 95% 292 16
T2_US_MIT 2500 2025 2150 1 N/A 90% 974 0
T2_US_Nebraska 3300 1725 1443 45 2 90% 152 2
T2_US_Purdue 3500 2325 1423 19 2 90% 76 180
T2_US_UCSD 2392 1225 1095 8 <2 90% 312 600
T2_US_Wisconsin 3000 1275 1484 301 N/A 90% 474 13
Total 19789 11225   737     2536 821

All disk space values are in TB.

Note that the Total Space reported for UCSD is 4.9PB raw disk and only certain spaces are replicated. Assuming that the remaining free space of 503TB is replicated with 2 copies, then the total space for hosting is 2,392TB.

  • Total Space: total usable disk space purchased with U.S. CMS funds, from our twiki. Often data are replicated so that the raw disk space is some factor greater than usable. Replication factors vary by site and technology used, but typically ~2.
  • DDM: Dynamic data management quota.
  • PhEDEx Used: total disk space used (resident) according to PhEDEx on January 21, 2017.
  • Local (TB): Of the PhEDEx space in the previous column, how much is subscribed to the local group.
  • Rep. Replication level of the data on disk, as declared by the site in 2015.
  • Max %: The maximum percentage full at which the storage system can comfortably run, as declared by the site in 2015.
  • User is the total discovered usage of /store/user.
  • Group is the total discovered usage of /store/group.

Space Query of US Tier-2 Sites throughout 2017

Site Date Usable DDM PhEDEx Local User User Raw Group Group Raw Free
T2_US_Caltech 2017-02-08 2820 1425 1270 88 256 513 10 20 1041
T2_US_Florida 2017-02-08 2277 1125 1403 275 290 342 0 0 587
T2_US_MIT 2017-02-08 4000 2025 2048 1 1024 2048 0 0 950
T2_US_Nebraska 2017-02-08 3300 1725 1618 52 153 458 2 3 1368
T2_US_Purdue 2017-02-08 3500 2325 1956 20 82 247 181 544 892
T2_US_UCSD 2017-02-08 2518 1225 1137 8 288 576 572 1144 521
T2_US_Wisconsin 2017-02-08 3000 1275 1352 257 459 918 13 26 996
T2_US_Caltech 2017-03-01 2820 1425 1352 88 256 513 10 20 1041
T2_US_Florida 2017-03-01 2277 1225 1454 275 287 340 0 0 490
T2_US_MIT 2017-03-01 4000 2025 2212 1 1024 2048 0 0 950
T2_US_Nebraska 2017-03-01 3300 1725 1536 19 153 459 2 3 1401
T2_US_Purdue 2017-03-01 3500 2325 2089 20 79 238 192 577 884
T2_US_UCSD 2017-03-01 2620 1225 1126 9 278 556 583 1166 633
T2_US_Wisconsin 2017-03-01 3000 1275 1485 285 460 920 13 26 967
T2_US_Caltech 2017-04-04 2820 1425 1300 40 256 512 10 20 1089
T2_US_Florida 2017-04-04 2277 1225 1485 276 304 358 0 0 472
T2_US_MIT 2017-04-04 4000 2025 2222 1 1024 2048 0 0 950
T2_US_Nebraska 2017-04-04 3300 2725 2222 23 153 459 2 3 397
T2_US_Purdue 2017-04-04 3500 2325 2191 20 86 258 194 582 875
T2_US_UCSD 2017-04-04 3023 1225 1126 7 308 616 479 958 1110
T2_US_Wisconsin 2017-04-04 3000 1275 1413 285 480 960 13 26 947
T2_US_Caltech 2017-05-09 2820 1425 1300 44 256 512 10 20 1085
T2_US_Florida 2017-05-09 2277 1225 1464 276 303 N/A 0 0 473
T2_US_MIT 2017-05-09 4000 2025 2263 1 1151 2302 0 0 823
T2_US_Nebraska 2017-05-09 3300 2725 2247 23 153 459 2 3 397
T2_US_Purdue 2017-05-09 3500 2325 2212 20 89 267 194 582 872
T2_US_UCSD 2017-05-09 2762 1225 1106 7 318 636 484 968 854
T2_US_Wisconsin 2017-05-09 3000 1275 1505 285 478 956 13 26 949
T2_US_Caltech 2017-06-09 2820 1425 1270 54 255 510 10 20 1076
T2_US_Florida 2017-06-09 2227 1225 1377 45 306 N/A 0 0 651
T2_US_MIT 2017-06-09 4000 2025 2129 1 1128 2256 0 0 846
T2_US_Nebraska 2017-06-09 3300 2725 2314 23 153 459 15 30 384
T2_US_Purdue 2017-06-09 3500 2325 2041 20 95 285 194 582 866
T2_US_UCSD 2017-06-09 2966 1225 1064 8 340 680 485 970 797
T2_US_Wisconsin 2017-06-09 3000 1275 1357 225 480 960 13 26 1007
T2_US_Caltech 2017-07-06 2820 1425 1290 54 255 510 10 19 1076
T2_US_Florida 2017-07-06 2227 1425 1526 266 106 N/A 0 0 430
T2_US_MIT 2017-07-06 4000 2025 2171 1 862 1724 0 0 1112
T2_US_Nebraska 2017-07-06 3300 2725 2458 23 156 469 2 3 394
T2_US_Purdue 2017-07-06 3500 2325 2079 20 101 303 194 582 860
T2_US_UCSD 2017-07-06 3061 1225 1096 8 350 700 485 970 866
T2_US_Wisconsin 2017-07-06 3000 1425 1423 167 491 982 13 26 904

All space quantities are expressed in TB.

  • Usable are site-provided numbers from the twiki for usable disk space. UCSD reports raw disk, so space for hosting is calculated by summing actual hosted data size plus free space. UCSD also has 264 TB in an xrootd cache (as of July 6, 2017) which is used for caching MiniAOD available for analysis jobs.
  • DDM: Dynamic data management quota.
  • PhEDEx: PhEDEx total resident.
  • Local: Part of the PhEDEx resident space subscribed from the local group.
  • User: Space of /store/user files, as reported in spacemon.
  • User Raw: Raw disk usage of /store/user, as reported in spacemon.
  • Group: Space of /store/group files, as reported in spacemon.
  • Group Raw: Raw disk usage of /store/group, as reported in spacemon.
  • Free is the presumed free disk space at the site, computed simply as Deployed minus the DDM, Local, User, and Group spaces. Note, however, that most storage technologies require some buffer of free space (~10% of the total) in order to function properly, and Free includes this buffer as well as any temporary buffers. Free also includes orphaned files at sites, which CP indicated might be O(250TB) at some sites.
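The Free definition above can be computed directly from a CSV row in the Site,Usable,DDM,PhEDEx,Local,User,Group field order; the site name and values here are made up for illustration:

```shell
# Free = Usable - DDM - Local - User - Group (fields 2, 3, 5, 6, 7).
echo "T2_US_Example,3000,1275,1400,250,480,13" | \
  awk -F, '{ print $1 " Free: " ($2-$3-$5-$6-$7) " TB" }'
# prints: T2_US_Example Free: 982 TB
```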

New script to automatically generate the table above. DDM numbers are static for the year (April to April) and put in by hand. Usable space numbers are provided by the sites on a twiki.

#!/bin/sh

SITES="T2_US_Caltech T2_US_Florida T2_US_MIT T2_US_Nebraska T2_US_Purdue T2_US_UCSD T2_US_Wisconsin"

run_rabbit_run() {

SITE=$1

# Scrape this site's usable-space figure from the deployment twiki table
# and pair it with the hand-entered DDM quota.
URL="https://twiki.cern.ch/twiki/bin/view/CMSPublic/USCMSTier2Deployment"
TMPFILE=`mktemp -t DEPLOY.html.XXXXXXX`
curl -o $TMPFILE $URL >> /dev/null 2>&1
xmllint --format $TMPFILE --html 2>&1 | \
  grep twikiTableCol4 | head -8 | tail -7 | awk '{print $6}' |
  awk '
BEGIN {
  name[1]="Caltech"
  name[2]="Florida"
  name[3]="MIT"
  name[4]="Nebraska"
  name[5]="Purdue"
  name[6]="UCSD"
  name[7]="Wisconsin"
  ddm[1]=1425
  ddm[2]=1425
  ddm[3]=2500
  ddm[4]=2725
  ddm[5]=2325
  ddm[6]=1225
  ddm[7]=1425
  n=1
}
{
  print "T2_US_"name[n]","$1","ddm[n]
  n+=1
}' | grep $SITE | tr '\n' ','
rm $TMPFILE

#URL="http://dynamo.mit.edu/dynamo/detox.php?partitionId=10"

# PhEDEx resident (non-custodial) bytes for the site.
URL="https://cmsweb.cern.ch/phedex/datasvc/xml/prod/nodeusage?node=${SITE}"
curl -k $URL 2>&1 | tr ' ' '\n' | grep noncust_node_bytes | \
  awk -F\' -v site=$SITE \
  'BEGIN{bytes=0} 
   {bytes+=$2}
   END{print bytes/1024./1024./1024./1024.}' | tr '\n' ','
# PhEDEx bytes subscribed to the local group.
URL="https://cmsweb.cern.ch/phedex/datasvc/xml/prod/groupusage?node=${SITE}&group=local"
curl -k $URL 2>&1 | tr ' ' '\n' | grep node_bytes | awk -F\' \
  'BEGIN{bytes=0} 
   {bytes+=$2}
   END{print bytes/1024./1024./1024./1024.}' | tr '\n' ','

# Spacemon report: /store/user and /store/group usage, plus report age.
URL="https://cmsweb.cern.ch/dmwmmon/datasvc/perl/storageusage?node=$SITE"
TMPFILE=`mktemp -t SPACEMON.html.XXXXXXX`
curl -k -o $TMPFILE $URL >> /dev/null 2>&1
DAYSOLD=`cat $TMPFILE | grep TIMESTAMP | \
  awk -F\' \
    'BEGIN{x=0;sign=-1}{x+=sign*$4;sign=1}END{print int(x/86400.)}'`
if [ $DAYSOLD -gt 7 ] ; then
  echo "WARNING!: $SITE spacemon information is $DAYSOLD days old." 1>&2
fi
# Sliding three-line buffer: when a /store/user/ or /store/group/ path line
# appears, BUFFER3 holds the record carrying the corresponding size field.
BUFFER1=""
BUFFER2=""
BUFFER3=""
for LINE in `cat $TMPFILE`; do
  echo $LINE | grep "store\/user\/'" >> /dev/null 
  STOREUSERrc=$?
  echo $LINE | grep "store\/group\/'" >> /dev/null
  STOREGROUPrc=$?
  if [ $STOREUSERrc -eq 0 ] ; then
     STOREUSER=`echo $BUFFER3 | awk -F\' '{print $2/1024./1024./1024./1024.}'`
  fi
  if [ $STOREGROUPrc -eq 0 ] ; then
     STOREGROUP=`echo $BUFFER3 | awk -F\' '{print $2/1024./1024./1024./1024.}'`
  fi
  BUFFER3=$BUFFER2
  BUFFER2=$BUFFER1
  BUFFER1=$LINE
done
echo $STOREUSER","$STOREGROUP
rm $TMPFILE
return
}

echo "Site,Date,Usable,DDM,PhEDEx,Local,User,Group,Free"
DATE=`date +%F`
for SITE in $SITES ; do
  if [ $SITE == "T2_US_UCSD" ] ; then
    FREE=`df -m /hadoop | tail -1 | awk '{print $4/1024./1024.}'`
    run_rabbit_run $SITE | awk -F\, -v free=$FREE -v date="$DATE" \
      '{print $1","date","$3+$5+$6+$7+free+288","$3","$4","$5","$6","$7","free}'
  else
    run_rabbit_run $SITE | awk -F\, -v date="$DATE" \
      '{print $1","date","$2","$3","$4","$5","$6","$7","($2-$3-$5-$6-$7)}'
  fi
done

exit

Site Date Usable DDM PhEDEx Local User Group Free
T2_US_Caltech 2017-08-11 2355 1425 1193.06 48.8729 255.494 9.56874 616.064
T2_US_Florida 2017-08-11 2277 1425 1416.31 242.175 102.623 0 507.202
T2_US_MIT 2017-08-11 4000 2500 2378.15 0.938024 998.5 0 500.562
T2_US_Nebraska 2017-08-11 3300 2725 2277.32 20.5366 160.506 1.59234 392.365
T2_US_Purdue 2017-08-11 3500 2325 1984.75 18.3657 104.164 196.467 856.003
T2_US_UCSD 2017-08-11 2984.66 1225 989.686 0.644235 300.239 485.633 685.147
T2_US_Wisconsin 2017-08-11 3000 1425 1381.25 172.952 544.168 13.1024 844.778
Total   21416.66 13050 11620.526 504.484459 2465.694 706.36348 4402.121

  • WARNING!: T2_US_Caltech spacemon information is 35 days old.
  • WARNING!: T2_US_Florida spacemon information is 14 days old.
  • N.B. These quantities are in binary TB (1 TB = 1,024 GB = 1024^4 bytes), unlike the previous table, where the PhEDEx quantities use decimal TB (1 TB = 1,000,000,000,000 bytes).
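The ~9% gap between the two unit conventions can be checked directly; a minimal sketch, not tied to any row above:

```shell
# One decimal petabyte of raw bytes expressed in decimal vs. binary TB.
bytes=1000000000000000
awk -v b="$bytes" 'BEGIN {
  printf "decimal TB: %.1f\n", b / 1e12                    # 1 TB = 10^12 bytes
  printf "binary TB: %.1f\n",  b / (1024*1024*1024*1024)   # 1 TB = 1024^4 bytes
}'
# prints:
# decimal TB: 1000.0
# binary TB: 909.5
```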

JamesLetts - 2019-01-31

Topic attachments
Attachment History Action Size Date Who Comment
2016-10-31_Caltech_Quote_JBOD_and_HGST_drives_EN_01_HP.pdf r1 manage 580.7 K 2016-11-05 - 13:56 JamesLetts
2016-11-02_Caltech_Quote_SMC_2U_Server_EN_03_HP.pdf r1 manage 582.3 K 2016-11-05 - 13:56 JamesLetts
93ACC.pdf r1 manage 93.5 K 2015-09-01 - 23:53 JamesLetts
AH8Q7141-A.pdf r1 manage 95.3 K 2018-04-11 - 10:52 JamesLetts
CMS_RAID_system_invoice_2017nov.pdf r1 manage 60.8 K 2018-03-10 - 07:50 JamesLetts
CX-75646__11_x_4-in-one_Servers_discounted_as_configured_SEPT_14th.pdf r1 manage 127.8 K 2015-09-15 - 16:15 KevinLannon
CX-88958.pdf r1 manage 105.3 K 2017-06-29 - 23:37 JamesLetts
CX-89326.pdf r1 manage 105.0 K 2017-07-07 - 00:16 JamesLetts 28 x 2 Intel Xeon E5-2680V4 14C/28T, 1,586 batch slots.
Calltech_FY15_Voucher_43.pdf r1 manage 104.0 K 2016-02-19 - 15:46 JamesLetts
Calltech_FY16_Voucher_55.pdf r1 manage 336.3 K 2017-02-21 - 19:46 KevinLannon
Caltech-AH6Q4807.pdf r1 manage 159.7 K 2016-07-29 - 21:04 JamesLetts 360TB storage device
Caltech-Tier2-Upgrade-2017.pdf r1 manage 275.6 K 2017-08-01 - 21:55 KevinLannon
Caltech_FY17_06.pdf r1 manage 56.6 K 2018-04-11 - 11:00 JamesLetts
Caltech_Quote_2016-04-08_SuperMicro_2U_TWIN_PRO_4_node_hp.pdf r1 manage 159.9 K 2016-04-16 - 01:19 KevinLannon
Caltech_Quote_2016-11-2_SuperMicro_MicroBlade_6U_EN_1.7_US_price_HP.pdf r1 manage 611.2 K 2016-11-05 - 13:56 JamesLetts
Caltech_quote_2015.pdf r1 manage 64.8 K 2015-10-30 - 21:15 KevinLannon
Dell_-_192GB.pdf r1 manage 66.9 K 2017-10-31 - 17:17 JamesLetts
FLORIDA-AH7Q6417-J.pdf r1 manage 89.2 K 2017-10-13 - 23:39 JamesLetts
KSCQ12717.pdf r1 manage 103.8 K 2015-09-01 - 22:15 JamesLetts file
KSCQ14004.pdf r1 manage 45.8 K 2016-08-09 - 22:23 JamesLetts
KSCQ14169-A-1.pdf r1 manage 48.1 K 2016-11-22 - 22:43 KevinLannon
KSCQ16176.pdf r1 manage 79.5 K 2018-10-08 - 20:38 JamesLetts
MIT-4501991298.pdf r1 manage 66.3 K 2016-07-22 - 00:40 JamesLetts
MIT-4502001608.pdf r1 manage 45.5 K 2016-07-22 - 00:40 JamesLetts
MIT-Quote_US_PC_SC_3000018186937.2_2017-10-10.pdf r1 manage 66.1 K 2017-10-13 - 23:39 JamesLetts
MIT_FY15_90255127.pdf r1 manage 88.7 K 2016-03-17 - 18:34 KevinLannon
MIT_FY15_90257712.pdf r1 manage 128.9 K 2016-06-20 - 18:39 JamesLetts
MIT_FY16_90270330.pdf r1 manage 109.8 K 2017-01-13 - 17:50 JamesLetts MIT invoice
MIT_FY17_90289424.pdf r1 manage 84.2 K 2018-01-30 - 21:21 JamesLetts
MIT_FY18_90302976.pdf r1 manage 88.6 K 2019-02-01 - 03:20 JamesLetts
MIT_Quote_Summary_717505514.pdf r1 manage 100.9 K 2015-10-31 - 02:12 JamesLetts MIT quote
Nebraska_FY17_2605110161.08.pdf r1 manage 161.6 K 2018-03-01 - 23:57 JamesLetts
Purdue-Quote-2017.pdf r1 manage 43.6 K 2017-09-29 - 20:09 JamesLetts
Purdue-invoice-Oct2016.png r1 manage 269.0 K 2016-11-04 - 13:03 JamesLetts
PurdueCMSClusterServices2016.pdf r1 manage 99.6 K 2016-09-20 - 17:47 KevinLannon
PurdueHPQuote-Sept6_2016.pdf r1 manage 1552.1 K 2016-09-20 - 17:47 KevinLannon
Purdue_FY15_900185656.pdf r1 manage 211.9 K 2016-02-08 - 16:42 JamesLetts
Purdue_FY15_900188530.pdf r1 manage 214.6 K 2016-03-15 - 16:24 JamesLetts invoice
Purdue_FY17_900238143.pdf r1 manage 238.4 K 2018-02-21 - 21:06 JamesLetts
Purdue_Storage_quote.pdf r1 manage 48.8 K 2016-09-20 - 17:47 KevinLannon
Purdue_namenode_quote.pdf r1 manage 100.3 K 2016-09-20 - 17:47 KevinLannon
Quote2CRSITier2Gold6138Systems2U4Nodes_AlainWilmouth073018.pdf r1 manage 219.0 K 2018-07-31 - 22:47 JamesLetts
Quote2CRSITier2Gold6138andHigherSystems2U4Nodes_AlainWilmouth080718.pdf r1 manage 336.5 K 2018-08-09 - 21:05 JamesLetts
Quote_CMSIO3.pdf r1 manage 67.7 K 2017-11-23 - 04:01 JamesLetts UFL
Quote_US_PC_SC_3000000692965.1_2016-09-30.pdf r1 manage 116.4 K 2016-10-03 - 20:40 JamesLetts
Quote_US_PC_SC_3000002099042.1_2016-11-08.pdf r1 manage 90.4 K 2017-07-06 - 02:41 JamesLetts
Quote_US_PC_SC_3000019537998.1_6xR440.pdf r1 manage 88.5 K 2017-11-15 - 11:12 JamesLetts
Quote_US_PC_SC_3000027191643.1_2018-07-25.pdf r1 manage 60.7 K 2018-07-31 - 22:47 JamesLetts
Quote_US_PC_SC_3000029588584.1_2018-10-05.pdf r1 manage 67.0 K 2018-10-09 - 00:46 JamesLetts
UCSD-CX-80235-2.pdf r1 manage 69.2 K 2016-05-30 - 21:56 JamesLetts
UCSD_FY15_87983A0029.pdf r1 manage 85.7 K 2016-08-04 - 20:42 JamesLetts
UCSD_FY16_87983A0033.pdf r1 manage 324.0 K 2016-09-20 - 18:15 KevinLannon
UCSD_FY17_A0005.pdf r1 manage 177.4 K 2017-11-15 - 15:33 JamesLetts
UNIV_OF_NEBRASKA__LINCOLN_S3710v2.xlsm r1 manage 87.7 K 2017-07-06 - 02:41 JamesLetts
UofFlorida_FY2016_M000208303.pdf r1 manage 125.2 K 2016-08-04 - 20:42 JamesLetts
UofFlorida_FY2016_M000220089.pdf r1 manage 150.4 K 2017-02-09 - 14:46 JamesLetts
WISCONSIN-QUOTE-KSCQ15138.pdf r1 manage 45.5 K 2017-10-06 - 05:35 JamesLetts
WISCONSIN-invoiceJan17.png r1 manage 130.7 K 2017-02-03 - 15:22 JamesLetts
Wisconsin_FY16__MSN0529279.52.pdf r1 manage 58.0 K 2017-02-21 - 19:46 KevinLannon
Wisconsin_FY17_09.pdf r1 manage 45.7 K 2018-04-09 - 17:44 JamesLetts
Wisconsin_FY18_20.pdf r1 manage 44.0 K 2019-02-01 - 03:20 JamesLetts
Wisconsin_quote_KSCQ13366.pdf r1 manage 120.6 K 2016-02-19 - 16:14 JamesLetts
caltech_details.txt r2 r1 manage 2.1 K 2017-08-01 - 22:02 KevinLannon
image001.png r1 manage 16.7 K 2017-01-13 - 17:50 JamesLetts MIT invoice
purdue_quote_20180705.txt r1 manage 1.1 K 2018-07-05 - 19:56 KevinLannon
quote.txt r1 manage 0.4 K 2017-10-31 - 17:17 JamesLetts
Topic revision: r99 - 2019-02-01 - JamesLetts
 