Question
Question 1. Please describe your underlying media (e.g., SSD, HDD, tape), “enterprise” vs “consumer”, SATA vs SAS, shingled?, HAMR?, and any other significant aspect of the drives. For each storage type, please also give the (approximate) total media capacity of the underlying media you currently have in deployment. This is sometimes called the “gross” or “raw” capacity. For example, a RAID-6 system with 12 drives, each drive having a 1 TiB capacity, would have a total media (“raw”) capacity of 12 TiB.
Answers
CERN
Disk
EOS:
Enterprise SATA disks deployed as JBOD, no RAID. All PMR.
Capacity: ~270PB raw.
Mixture of disk generations, 2.5TB -> 12TB.
Next purchase will probably be 14TB.
Ceph
Enterprise SATA disks deployed as JBOD.
Capacity: ~15PB raw.
SSDs deployed for journaling and caching (~1% of capacity)
SSD not used as part of tiered storage
Two all-flash instances (for HPC, block storage)
Castor
Enterprise SATA disks deployed as RAID1.
SSDs on tape server machines to aid in repack (not exposed to users).
The Castor “public” instance uses a Ceph pool, part of which is 4+2 EC and part of which is 2 replicas (intended as a performance comparison).
~30PB raw.
Tape
Castor & Backup
Libraries: 4 IBM (3 Enterprise, 1 LTO) and 1 Oracle
Drives: 102
Cartridges: 31k
Used space: ~330PB
hephy-Vienna
Enterprise SAS. The current system is about 800 TB, maybe 1 PB raw. We are planning to move to a new system this year, about 3 PB raw.
KI-LT2-QMUL
20 Dell R7430XD (16*6 TB NL-SAS / server) + 58 Dell R510s (12*2TB NL-SAS / server) + 12 Dell R510s (12*3TB NL-SAS / server) + 2 HPE Apollo 4000 (24*8TB NL-SAS / server) + 4 HGST 4U60G2 arrays (240*12TB NL-SAS total)
UKI-LT2-RHUL
Media are all enterprise, mix of SATA and SAS. Total raw capacity is approx 1.7 PB.
RO-13-ISS
PMR SATA HDD, enterprise, 4 - 12 TB, 1.5 PiB
Nebraska
Our storage is a mix of SAS/SATA drives, 5.25" enterprise quality, ranging from 2 to 10 TB in size. Some drives are as old as 8 years, some less than two years old. In total we have 7.07 PB of disk active on the floor.
INFN-ROMA1
All SAN (FC) systems used in DAS mode. All SAS disks, with variable sizes up to 10TB, for a total of ~1.6 PB of usable space (~2PB raw disk space). A separate object storage facility on SAS disks, running on mixed systems (SAN, DAS, JBODs), is also available, with a raw disk space of ~200 TB. Another Gluster facility is also available with about 100 TB raw disk space. All disks are enterprise class.
NDGF-T1
Nearline SAS mostly. We don't operate the raw area, so it is hard to tell exactly what hardware is used. An estimate for the total capacity is 14 PB.
BEgrid-ULB-VUB
MASS STORAGE:
* Enterprise HDD, SATA, 7.2k, from 3 to 12 TB per disk.
* In RAID-6, using groups of 10 or 12 disks.
* Latest purchase will have 14 disks of 12TB, so 168TB raw.
USER STORAGE:
* Enterprise HDD, SATA, 7.2k, 3TB
* read cache on SSD, write cache on ZeusRAM
VM STORAGE:
* Enterprise HDD, SATA, 10k, 600GB
* read cache on SSD, write cache on ZeusRAM
NCG-INGRID-PT
HDD and SSD both SATA enterprise grade. Total raw capacity 4PB, WLCG tier-2 raw capacity 500TB.
Our storage facility uses mostly SAS nearline (enterprise) hard drives. The total media capacity is 3.2 PB.
LRZ-LMU
HDD, Consumer, SATA, shingled, 12 * 8 TB
CA-WATERLOO-T2
Using storage building blocks from Dell for dCache pools: a dual-sled configuration with 45 8TB enterprise SATA drives each, arranged in 4 pools x 11 drives in RAID-6 with one hot spare. After configuration we have about 274TB per server; 8 sleds in total.
Part of a much larger general-purpose cluster using Lustre for /home, /scratch and /project for thousands of users. Not much of this is utilized for grid computing. We have 2x480GB SSDs on compute nodes for local storage needs and grid jobs.
CA-VICTORIA-WESTGRID-T2
Primarily enterprise HDD, ranging from 8 TB disks (newer, 720 TB raw) to 3 TB (older, 900 TB raw), SATA or nearline SAS
Taiwan_LCG2
ASGC uses nearline SAS or enterprise SATA drives (mostly nearline SAS) with SMR technology.
The storage capacity currently in deployment is ~5PB.
13 servers with 12 to 16 8TB-SATA disks. Total raw capacity: 1632 TB
MPPMU
INFN-LNL-2
All our media consists of enterprise SAS HDDs (4TB older ones, 8TB newer). Total raw capacity is ~6500 TB.
Australia-ATLAS
HDD enterprise SAS. 1.4PB. 26 storage nodes
Infortrend external RAID with JBOD chain, 120 12TB HDDs per unit, 8 units, total net capacity 6PB for Storage Element. For local cache, 750TB in CephFS with size=2 on older hardware without RAID on 9 servers with 30 HDDs each.
KR-KISTI-GSDC-02
869TB of SAS HDD disk through a SCSI network.
UKI-LT2-IC-HEP
HDD, 10PB
from 34 to 60 drives of capacities from 3 to 12 TiB
UKI-SOUTHGRID-BRIS-HEP
HDD SAS 10 TiB
GR-07-UOI-HEPLAB
HDD, SATA, newer disks are enterprise class (~100TiB) while the older ones are consumer disks (~100TiB)
UKI-SOUTHGRID-CAM-HEP
Enterprise SATA HDD. 300TB raw capacity
USC-LCG2
HDD, SATA, no RAID, 1TB capacity. It is only used for the ops VO as part of the requirements of a Tier-2, since the only VO we support, LHCb, does not require us to provide storage space.
EELA-UTFSM
HDD, SAS: 36TB ; HDD, SATA: 520TB
DESY-ZN
Enterprise NL-SAS, 120TB raw per system
PSNC
DPM: SATA HDD, 0.500TiB; Xrootd: SATA HDD, 60TiB; dCache: HDD, 1PiB; tape: 10PB
UAM-LCG2
SATA3 and SAS3 disks. 2PiB total raw capacity.
T2_HU_BUDAPEST
SATA HDD, enterprise and consumer mixed, 1.7 PiB
INFN-Bari
Enterprise SATA - about 8TB RAW each disk
IEPSAS-Kosice
HDD, enterprise, SATA on LSI MegaRAID (6/12 Gb).
EOS: RAID-0 JBOD with 48 drives, each having 4TB capacity (a total media capacity of 192 TB), and 72 drives, each having 6 TB capacity, for a total media capacity of 624 TB.
XROOTD: RAID-6 with 144 drives, each having 2, 3, 4 or 6 TB capacity, for a total media capacity of 528 TB.
dCache: RAID-6 with 288 drives, each having 2, 3, 4 or 6 TB capacity, for a total media capacity of 1224 TB.
The main underlying media for WLCG storage are:
Disk infrastructure: DAS (Direct Attached Storage) with enterprise SATA HDDs, for a total of 29 248 TiB of raw capacity.
Tape infrastructure: 4 Oracle libraries with 50 drives. We use only Enterprise T10K media; with this media the full capacity of the infrastructure is 300 PB. Today the libraries are not fully populated with tapes, but 56 500 TiB are available for WLCG.
WEIZMANN-LCG2
enterprise near-line SATA, ~1.3 PB (DDN7000 system)
RU-SPbSU
HDD, SATA, 24 x 3.6 TB = 86.4 TB
USCMS_FNAL_WC1
We have storage in production dating back 10 years; we believe the correct answer to the above question might be "yes". We generally deploy enterprise-level RAID-6 storage arrays fronted by a server running dCache, connected by Fibre Channel. All nodes have 10 GE networking; the size of each storage node ranges from 100 TB to close to a PB raw, depending on age.
RRC-KI-T1
16 enclosures with SAS 2TiB disks connected by SATA (12 disks per enclosure) ~ 384 TiB RAW
23 enclosures with SAS 2TiB disks connected by iSCSI (12 disks per enclosure) ~ 552 TiB RAW
3+16+5 3U storage servers with SATA 2TiB disks (16 per server) ~ 768 TiB RAW
33+30+13 3U storage servers with SATA 6TiB disks (16 per server) ~ 7296 TiB RAW
13+11+6 3U storage servers with SATA 10TiB disks (16 per server) ~ 4800 TiB RAW
E07/E08 IBM tapes ~15PB RAW
vanderbilt
Consumer 7200 RPM drives with SAS interfaces. We buy whatever capacity minimizes $/GB.
UNIBE-LHEP
HDD enterprise: 1350 TB (SE) + 298 TB (ARC cache)
CA-SFU-T2
HDD, enterprise, SATA, 3.7 PB
_CSCS-LCG2
dCache: 4.8PiB, NL-SAS / scratch: SAS-SSD 90TiB + NL-SAS 500TiB (with Spectrum Scale tiering)
T2_BR_SPRACE
Mix of NL-SAS and SATA HDDs, mainly SATA; 696 disks summing up to 2724TB.
T2_BR_UERJ
We have HDFS across our entire cluster, with mixed nearline SATA/SAS drives of 1TB, 2TB and 4TB capacity each. We have a total media capacity of 700TB.
GSI-LCG2
HDD, enterprise, SAS, ~31 PiB (raw capacity, shared), out of which 2.8 PiB are reserved for ALICE
UKI-NORTHGRID-LIV-HEP
All storage is HDD/SATA. Some RAID6, some ZFS.
RAID6: 1.2 PB
ZFS: 0.5 PB
Total: 1.7 PB
CIEMAT-LCG2
At the moment we use only HDDs, although we are evaluating SSDs for certain services (but not really for storage). The disks are of 'enterprise' quality, SAS (and some legacy SATA), helium-filled in recent purchases, with 4Kn blocks.
The total raw capacity is ~2.5 PiB.
T2_US_Purdue
Enterprise class HDDs for all production storage - 8806TB RAW
A few shingled (SMR) disks in testing phase.
High-performance, high-availability storage service:
- SAN: Dell PowerVault MD3820f
- MD1200 x 7 with 12 SAS disks each (4 TiB)
- Total raw capacity: 336 TiB
High-capacity storage service:
- Dell RAID-6 servers (x 27) with SAS disks (8 TiB, 10 TiB)
- Total raw capacity: 2600 TiB
TRIUMF-LCG2
For the Canadian ATLAS Tier-1 data centre, we have 3 DataDirect Networks (DDN) storage systems for the pledged disk capacity and 1 dcs3860 for the HSM disk buffer, all using enterprise SAS drives.
One DDN12KX with 405 4TB SAS drives, plus 4 SSDs for caching. Total raw capacity 1582TiB.
Two DDN14KX with 500 12TB SAS drives each. Total raw capacity 11719TiB.
One dcs3860 with 60 8TB SAS drives and 60 3TB drives. Total raw capacity 645TiB.
One IBM tape library, with 1400 LTO8 tape cartridges, 1000 LTO7 cartridges and 3319 LTO6 cartridges, total capacity 30369TiB.
Summary: total disk raw capacity is 13101TiB; total tape media raw capacity is 30369TiB, with a 645TiB disk buffer.
KR-KISTI-GSDC-01
We use mostly "enterprise" NL-SAS (SATA disks with a SAS interface) HDDs for "disk" storage. We operate 3 different storage systems: 1) 2,368TB physical (4TB disks) and 1,687TB usable (RAID-6); 2) 2,592TB physical (8TB disks) and 1,615TB usable (RAID-6); 3) 1,070TB physical (10TB disks) and 650TB usable (RAID-6). These use different RAID-6 geometries: 1) 6D+2P; 2) and 3) 4D+2P (see the sketch below).
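To illustrate how a RAID-6 geometry such as 6D+2P or 4D+2P translates into a usable fraction of the physical capacity, here is a minimal sketch. It ignores hot spares, filesystem overhead and reserved space, so the numbers it produces are only upper bounds on the usable capacities quoted above; the function name is hypothetical.

<verbatim>
# Minimal sketch: upper bound on usable capacity for a RAID-6 D+P geometry.
# Assumption: ignores hot spares, filesystem overhead and reserved space.

def raid6_usable_fraction(data_disks, parity_disks=2):
    """Fraction of raw capacity available for data in a D+P group."""
    return data_disks / (data_disks + parity_disks)

if __name__ == "__main__":
    # 6D+2P keeps 6/8 of the raw capacity, 4D+2P keeps 4/6.
    print(2368 * raid6_usable_fraction(6))  # ~1776 TB upper bound for system 1
    print(2592 * raid6_usable_fraction(4))  # ~1728 TB upper bound for system 2
</verbatim>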
GRIF
Enterprise, CMR, SATA and NL-SAS, 7.5 PB
SATA servers with attachments:
- 6 servers with 12 disks of 8 TiB + attachments with 12 disks of 8 TiB (192 TiB x 6 = 1152 TiB)
- 4 servers with 16 disks of 10 TiB + attachments with 12 disks of 10 TiB (280 TiB x 4 = 1120 TiB)
- 1 server with 16 disks of 8 TiB + an attachment with 12 disks of 8 TiB (224 TiB)
- 4 servers with 12 disks of 4 TiB + attachments with 12 disks of 4 TiB (96 TiB x 4 = 384 TiB)
- 5 servers with 16 disks of 4 TiB + attachments with 12 disks of 4 TiB (112 TiB x 5 = 560 TiB)
Total raw disk: 3440 TiB
HDD enterprise SAS nearline, RAID-6, 2.3 PB raw
SATA RAID6 raw = 2400 TiB
ZA-CHPC
RAID-6 with 12 drives (4TB), total raw capacity of 630TB
JINR-T1
NL SAS HDD enterprise, from 144 to 160 TB; IBM TS3500 tape robot 11PB.
praguelcg2
HDD, mixture of 3 - 12 TB disks, server + JBODs, total site RAW capacity around 6 PiB
UKI-NORTHGRID-LIV-HEP
All storage is Enterprise HDD, some SATA some SAS. Mostly RAID6, some ZFS.
RAID6: 1.3 PB
ZFS: 0.6 PB
Total: 1.9 PB
INDIACMS-TIFR
HDD, Consumer, SATA, drives in RAID 6 with drive capacity varying from 4 TB to 10 TB
TR-10-ULAKBIM
HDD, SATA, 360TiB + HDD, SATA, 360TiB
prague_cesnet_lcg2
Consumer SATA drives 280TiB
TR-03-METU
HDD, SATA, 672TiB
aurora-grid.lunarc.lu.se
IBM Spectrum Scale as central storage.
SARA-MATRIX_NKHEF-ELPROD__NL-T1_
NL-SAS HDD (17.5PB net capacity), tape 82PB
HDD enterprise SAS+SATA, 2, 4, 6, 10 TB/disk, raw capacity 1592 TB (including hot-spares)
DESY-HH
SAS disks for data, tapes for archiving, SSDs for system disks and for online data taking at the experiments.
~22PB on SAS disks.
T3_PSI_CH
~1 PB mostly of "consumer" storage based on SATA drives, and 250TB of NetApp
SAMPA
HDD, HP, SAS MDL, 960TiB and 350TiB
INFN-T1
SSD (SAS, Enterprise, metadata) - 40TB; HDD (NL-SAS, Enterprise, data) - 46PB raw, ~20% of HDDs have 4K sectors (4Kn); Tape (Enterprise) - 80PB
GLOW
Enterprise SATA HDD - raw space ~ 8.2 PB
UNI-FREIBURG
1490 nearline SAS enterprise HDDs, 4.34 PB raw
Ru-Troitsk-INR-LCG2
SATA RAID6 Raw=420TiB
T2_Estonia
HDFS: configured capacity 4.55 PB, 2.2PB usable (replica 2; some local user folders are replica 3). Consumer disks of at least NAS level (WD Red, Seagate IronWolf, some Seagate Enterprise series), deployed as JBOD. Sizes vary from 2TB, 3TB and 6TB to 10TB. Three disks per node, 200+ nodes (~630 disks total, nodes share compute resources), plus 11 storage nodes (5 x 24 disks (2TB/3TB), 6 x 32 disks (3TB/6TB)).
Ceph: 4 nodes, each with 8x Seagate EXOS 4TB; 2x Intel P3600 NVMe (PCIe) per node are used for journals (for the disks), with the leftover space forming a separate pool (for RBD or CephFS metadata); 3x Samsung PM1725a (hot-swap) for compute-node scratch. Combined: 128TB raw disk, 3.2TB Intel NVMe and 74TB Samsung NVMe.
Home (CephFS): usable 29TB, disk tier with NVMe journal.
VMs (Proxmox, OpenStack) (RBD): usable 29TB; also has an Intel NVMe cache tier, usable max 517GB.
Scratch (CephFS): usable 46TB; it runs erasure coding at the moment but we plan to move to replica 1 for scratch.
Usable capacity depends on replication or erasure coding (see the sketch below). Mostly we use replica 3. We have 11 pools in Ceph at the moment.
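As a brief illustration of how usable capacity relates to raw capacity under replication versus erasure coding, here is a minimal sketch; the replica count and the k+m profile in the example calls are illustrative assumptions, not taken from the site's configuration.

<verbatim>
# Minimal sketch: usable capacity from raw capacity under replication
# vs. erasure coding. The profiles used below are illustrative only.

def usable_replicated(raw, replicas):
    """Replica pools store every object 'replicas' times."""
    return raw / replicas

def usable_erasure_coded(raw, k, m):
    """Erasure-coded pools store k data chunks plus m coding chunks."""
    return raw * k / (k + m)

if __name__ == "__main__":
    print(usable_replicated(128.0, 3))        # a replica-3 pool on 128 TB raw
    print(usable_erasure_coded(128.0, 4, 2))  # a 4+2 EC profile on 128 TB raw
</verbatim>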
pic
Disk → Usable capacity of 9.75 PB of disk space, used by LHC and non-LHC experiments (~15 VOs) and managed by dCache. The servers are based on commodity hardware, equipped with SATA or SAS HDDs (the newer ones, which make up 70% of total capacity) varying in size from 2 TB to 10 TB, in the range of 50-350 TB/server. The gross or raw capacity of all of the deployed servers is 11.2PB. Disks are configured in a RAID-6 structure. The number of disks per RAID depends on the total number of disks per server; we usually create more than one RAID per server.
Tape → Usable and raw capacity of 26.3 PB of tape space, used by LHC and non-LHC experiments (~10 VOs) and managed by Enstore. The technologies used comprise LTO5, T10KC and T10KD, in an SL8500 tape library equipped with 4 LTO5, 8 T10KC and 6 T10KD drives.
ifae
The ifae site is hosted at PIC; its disk storage is the shared infrastructure described in the pic entry above (9.75 PB usable managed by dCache, 11.2 PB raw, RAID-6 on commodity servers).
NCBJ-CIS
Hybrid storage with SAS HDD (7080TB raw) for data and SAS SSD (17TB) for metadata.
Disk:
Enterprise-quality HDDs with a capacity between 6TB and 12TB. In total there is 54PB of raw capacity.
Tape:
Oracle T10KD media. Raw capacity 59PB.
T2_IT_Rome
HDD with dCache. Total of 1050 TB currently available.
BNL-ATLAS
Our mass disk storage for the ATLAS T1 is mainly enterprise SAS PMR, with a small fraction of SATA SMR. NVMe/SSDs are used on core servers (e.g., databases) and caching servers like XCache. The total capacity is ~27PB. We also have a tape system, which uses an Oracle SL8500 library with LTO-series tape drives and tapes. So far we have ~46PB of data on tape for ATLAS. If we also count other non-WLCG communities, the total data on tape is >100PB.
FZK-LCG2
~45PB raw capacity on nearline-SAS disks for GPFS data (no SMR/MAMR/HAMR)
~90TB raw capacity on SAS SSDs for GPFS meta data (not directly user accessible)
~55PB tape capacity used (T10K-D)
INFN-NAPOLI-ATLAS
HDD SAS enterprise (4 or 8 TB each) = raw capacity 2300 TiB.
HDD SATA enterprise (3 TiB each) = raw capacity 576 TiB.
--
OliverKeeble - 2019-08-22